May 17, 2008
Inside Esther’s brain
Esther Dyson has posted the ultimately intimate photos: scans of her brain. Here’s one of my favorites:
Notice the eyes that seem to follow you wherever you move…
May 1, 2008
A couple of days ago, a post on a Canadian newspaper’s blog gave me credit for something I didn’t do. Before I could leave a comment correcting the post, the site insisted I register. Registration there is free (in the “no cash changes hands” sense), but it required me to supply not only my email address and name, but also my sex and age. It also permitted me to enter yet more demographic data, which I declined to do. I didn’t want to have to supply any info, but I really wanted to correct that post. It even made me confirm an email sent to the address I registered because, I suppose, otherwise the terrorists have won.
The experience made me worry yet again about the efforts to put individuals in control of their own identity information. That sounds like an unarguable good, since the alternative is unarguably bad: letting others have control over your identity info. But the effect of these well-intentioned efforts will be — I’m afraid — a rapid decrease in personal privacy. The personal ID efforts not only give us control over our information; they also make it easy for us to supply it to others. Rather than having to type in our home address yet again, these new ID schemes will enable us to furnish information simply by pressing a button.
Since just about every vendor on the Web would like to know more about you rather than less, why won’t just about every vendor ask for more information rather than less? It’s all just a button press. Of course, you can choose not to deal with vendors who ask for too much info, but most of us will compare that with the post we want to correct, the sweater we want to buy, or the vacation we hope to win, and will just press the button.
We are making it easier to supply personal information without making it harder to ask for it. That should worry us.
Since the efforts to give users control over their personal information will inevitably continue — and the people I know who are involved in this are among the greatest champions of Web openness and personal freedom — here’s a suggestion for making it harder for vendors to ask for more information than they need. Suppose we were to create some rough categories of “asks” and give them unambiguous names. For example, we could call the ID info that does nothing but verify that you are who you say you are when buying something the “Credit Card Authorization Swipe.” The “ask” that wants to know your name and email address could be called the “Email ID Swipe.” The one that wants to know your demographics could be the “Marketing Personalization Swipe,” etc. The aim would be to get vendors to use those names with some uniformity, so that we would not only know what we’re giving, but there might be some market pressure (or at least some shame) against asking for the full demographic roster when someone’s just trying to correct an error in a post. These nomenclature packages could even be graded to indicate how invasive they are.
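To make the idea concrete, here’s a toy sketch of the graded “swipe” categories as a data structure. The swipe names come from the post; the field lists and grade numbers are invented for illustration only.

```python
# Illustrative sketch of graded "ask" categories. The swipe names are from
# the post; the grades (1 = least invasive) and field lists are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Swipe:
    name: str
    fields_requested: tuple  # what the vendor gets to ask for
    grade: int               # invasiveness, on a hypothetical 1-3 scale

SWIPES = [
    Swipe("Credit Card Authorization Swipe", ("payment token",), 1),
    Swipe("Email ID Swipe", ("name", "email"), 2),
    Swipe("Marketing Personalization Swipe", ("name", "email", "sex", "age"), 3),
]

def least_invasive_for(task_fields):
    """Return the lowest-grade swipe that covers what a task actually needs."""
    for s in sorted(SWIPES, key=lambda s: s.grade):
        if set(task_fields) <= set(s.fields_requested):
            return s
    return None

# Correcting a blog post needs only name + email, so the market-pressure
# question becomes: why is the vendor asking for a grade-3 swipe?
print(least_invasive_for(("name", "email")).name)  # Email ID Swipe
```

The point of the uniform names is exactly what this lookup makes visible: when the smallest swipe that covers the task is grade 2, a vendor demanding grade 3 has some explaining to do.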
I’m just thinking out loud here, but if we’re going to make it easy to give out our personal information, we ought to be thinking about the norms, market forces, or rules that would make it harder to ask for that information.
* * *
I’m on the road, so I may be pokey about replying.
April 29, 2008
Chris Conley, a Berkman Fellow working on the Open Net Initiative, is giving a lunchtime talk on “Digital Surveillance and Transparency.” [Note: I am live-blogging, hence typing quickly, missing things — missing many things today, actually — getting things wrong, etc. The session will be available in full at Media Berkman. ]
The Surveillance Project looks for evidence of surveillance. But lots of surveillers don’t talk about what they do, so the project looks at tools and technologies, infrastructure, and the legal and/or political constraints. And it looks at the implications for privacy, civil rights, etc.
A security consultant, Ed Giorgio, said “Privacy and security are a zero-sum game.” But this isn’t necessarily true, says Chris. Disclosure can make surveillance more effective. For example, people may behave more the way you want by letting them know they’re being watched.
Chris goes through the parameters of the question. The effect of transparency depends on what you’re trying to do with surveillance. E.g., Facebook’s Beacon ad program watches what you’re doing, without a lot of transparency, to increase the accuracy of ads. Phorm watches what sites you go to in order to achieve the same aim. Surveillance for security purposes is aiming at preventing actions and may well want to be non-transparent. There’s also the audience to consider: the targets of the surveillance, affected third parties (e.g., victims of botnet infections), and other interested parties. It is, he shows, an equation with lots of variables.
Chris walks through some examples. E.g., if you monitor file sharing, announcing that you’re detecting 5% might have an effect. Or, you might announce that you were detecting all files available via BitTorrent. Or all those who are uploading. Each of these might have a different effect. Does announcing a surveillance program deter terrorists? Perhaps not, and announcing it might enable terrorists to counter the surveillance.
What’s the difference with digital surveillance, Chris asks. You can collect more, from more places, of more types. The legal constraints are often very unclear. The mechanisms are rapidly changing. Private entities are becoming involved. E.g., OnStar was collecting conversations in cars for policing purposes.
The goal of the project is to argue that surveillance needs oversight, public discussion of the goals, and how those goals can be most narrowly met. Chris ends by pointing at Zimbabwe’s recent law that requires ISPs to wiretap their users. Even though it may not actually be happening, this “transparency” can “be a tool to suppress expression on the cheap.”
Q: In the US, are there laws beyond wiretapping, child porn, and financial data retention, that have caused private companies to alter their data retention processes?
A: There are no data retention laws in the US.
(ethanz) The gap between what may be possible in surveillance and what people perceive to be possible is pretty vast. In the Middle East, among activists, it’s believed that the entire Net passes through seven servers in DC and that every communication is monitored. This rumor has attained the status of fact in the developing world. The panopticon effect is orders of magnitude more powerful than what these systems are capable of doing. People will not stop believing this.
Q: How well do the counter-digital-surveillance techniques work?
A: Unclear. If you’re identified as a target in a technologically sophisticated country, there’s very little you can do online to counter it.
Ethan: In one country, they were listening in through parabolic mics a few doors down. There’s nothing you can do about it in a sufficiently motivated environment.
Chris: The best way to keep yourself unidentified is obfuscation. Talk about your topic when in World of Warcraft.
Do people use steganography?
Roger: It’s a myth that it can’t be detected. You can detect non-random low-order bits in graphics.
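[A toy sketch of what Roger is describing, added for illustration (the synthetic “image” data and thresholds are invented; this is not a real steganalysis tool). Straight LSB embedding leaves the low-order bit plane looking like a fair coin, and that randomness is itself detectable, because natural images’ low-order bits are correlated with their neighbors.]

```python
# Toy illustration: LSB embedding makes the low-order bit plane look like
# fair coin flips, which stands out against natural, correlated pixel data.
import math
import random

def lsb_bias(data):
    """Z-score of the count of 1-valued low-order bits vs. a fair coin."""
    n = len(data)
    ones = sum(b & 1 for b in data)
    return (ones - n / 2) / math.sqrt(n / 4)

random.seed(0)
# Fake "natural" pixels: a smooth gradient, so low-order bits are structured.
natural = bytes(min(255, i // 4) for i in range(10_000))
# The same pixels after hiding a random message in the LSBs.
stego = bytes((b & 0xFE) | random.getrandbits(1) for b in natural)

print(abs(lsb_bias(natural)))  # large: LSBs are far from random
print(abs(lsb_bias(stego)))    # small: LSBs look like coin flips
```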
Ethan: And if they communicate through Tor, you’re flagging (in many countries) that you’re up to no good.
Ethan: I’d like information so people can make better risk assessments. How good are the surveillers? Are they as good as the “tin hats” think? I doubt it, but it would be good to know. E.g., people in Zimbabwe are dropping off of political humor lists, for fear they’re being watched. People over-estimate the ability of governments to watch us.
Gene: Let me sum up: To stop terrorists we’d also stop activists. We have a false sense of security but also a false chilling effect.
Chris: Yes, from the point of surveillance, terrorists and activists are both people trying to hide their communication.
Gene: If from a policy/legal standpoint there’s no difference…
Chris: In a repressive regime, there’s no difference…
Ethan: It’s a difference between behavioral and content analysis. If we were capable of doing the sort of content analysis that most people think we’re capable of doing, people wouldn’t be scared off from (e.g.) participating in Koranic online discussions to argue against suicide bombing.
April 27, 2008
If you want to chat online with a Comcast support person, you cannot do so unless you give them your Social Security number.
Lessons to learn:
1. Unless restrained, companies will demand more and more identification from us, because violating our privacy doesn’t cost them anything.
2. We cannot rely on market forces to restrain private sector ID greed.
3. Comcast continues to lead the field in overall corporate suckage.
February 14, 2008
According to The Times, British school kids will be assigned a unique number that will be associated with their school records and that will follow them for life. Privacy advocates are concerned.
Not to mention that in the UK, when they say this is going on your permanent record, they’ll really mean it.
November 28, 2007
The Wall Street Journal did a poll of 200 Facebook users (which doesn’t sound like a very significant number). The results:
If Facebook could tell your friends what you do on other sites — buying movie tickets, clothes, etc. — when would you want to share that information? Of the 200 respondents, 1.5% chose always, 30.5% chose often, sometimes or rarely and 68% chose never.
November 26, 2007
Alan Patrick of the Broadstuff blog wagers 3:2 that now that the “A List” has weighed in against Facebook’s new ad program, Facebook will drop it.
I’d like to think so (see my post here), but I’d wager 100:1 that Facebook will continue. Of course, I’m neither a bettor nor much of a predictor (remember the glorious eight years of the President Howard Dean administration?), but here’s my thinking:
1. The A-List ain’t what people once thought it was. The folks Alan mentions are influential within the tech community, but they are not the head of the long tail and thus don’t have much direct influence over the broad base of Facebook users. (Alan has me on the list, which makes little sense in terms of readership or influence. But, what the heck. I’m just happy to be on a list.)
2. There has been no great uproar from Facebook users.
3. Facebook has justifications — rationalizations, in my view — for their decisions. For example, Facebook says if you don’t click on any buttons on the popup that invites you to share news of your purchase, it defaults to “yes” because Facebook wants to encourage users to try the program. Besides, Facebook says with some justice, you have to explicitly click on a “yes” button once you log into Facebook before the news is shared. (Sorry this is confusing. See Ethanz for a clear explanation.) True enough. Nevertheless, this strikes me as an anti-user decision that Facebook wouldn’t have made if it weren’t going to make a gazillion dollars from their ad program.
4. Facebook will make a gazillion dollars from their ad program.
November 14, 2007
[This post is also running at HuffingtonPost.]
With its new advertising infrastructure, Facebook is being careful to protect privacy of information. But they are bucking — and perhaps helping to transform — the norms of privacy. At its most basic, Facebook is getting the defaults wrong.
The new ad infrastructure enables Facebook to extend their reach onto other companies’ sites. For example, if you rent a copy of “Biodome” from Blockbuster.com, Blockbuster will look for a Facebook cookie on your computer. If it finds one, it will send a ping to Facebook. The Blockbuster site will pop up a “toast” (= popup) asking if you want to let your friends at Facebook know that you rented “Biodome.” If you say yes, next time you log into Facebook, Facebook will ask you to confirm that you want to let your friends know of your recent rental. If you say yes, that becomes an event that’s propagated in the news feed going to your friends.
Facebook has also created a new type of entity to allow non-people to have a presence in the system. So, a company or a character can now get a “page,” but not a profile. It can have “fans” but not “friends.” And the fact that you decided to become a fan of Cap’n Crunch is yet more information advertisers can use against you.
Facebook makes an astounding array of information available to its advertisers so that they can precisely “target” likely suspects. This is great for advertisers, and — given that the ad space is going to be filled up one way or another — it’s arguably better for users to see ads that are relevant rather than irrelevant. (The counter-argument is that targeting makes ads more successfully manipulative, not just more relevant.) Facebook is scrupulous, however, about not letting advertisers know the identity of those to whom it’s advertising. So, Blockbuster might buy ads for all men aged 18-24 who have joined the Pauly Shore fan club, but Blockbuster doesn’t know who those people are.
When Facebook talks about preserving user privacy, that’s what they have in mind: They do not let advertisers tie the information about you in a profile (your age, interests, etc.) to the information that identifies you in your profile (your name, email address, etc.). That is the informational view of privacy, and Facebook is likely to continue to get that right, if only because so many governmental agencies are watching them. I also think that the Facebook folks understand and support the value of maintaining privacy in this sense.
Yet, I find myself creeped out by this system because Facebook gets the defaults wrong in two very significant areas.
When Blockbuster gives you the popup asking if you want to let your Facebook friends know about your rental, if you do not respond in fifteen seconds, the popup goes away … and a “yes” is sent to Facebook. Wow, is that not what should happen! Not responding far more likely indicates confusion or dismissal-through-inaction than someone thinking “I’ll save myself the click.”
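The wrongness of that default is easy to see if you sketch the toast logic out. This is a reconstruction from the post’s description, not Facebook’s actual code; the function names and timeout handling are invented for illustration.

```python
# Sketch of the Beacon "toast" logic as the post describes it (illustrative
# only, not Facebook's real code): silence is treated as consent.
def beacon_toast(user_response, waited_seconds, timeout=15):
    """Return what gets reported to Facebook for a given user reaction."""
    if user_response is not None:
        return user_response  # an explicit "yes" or "no" click
    if waited_seconds >= timeout:
        return "yes"          # Beacon's default: no response means "yes"
    return None               # still waiting for the user

# A default that matched the social norm would treat silence as refusal:
def safer_toast(user_response, waited_seconds, timeout=15):
    if user_response is not None:
        return user_response
    return "no" if waited_seconds >= timeout else None

print(beacon_toast(None, 16))  # yes
print(safer_toast(None, 16))   # no
```

The two functions differ by one word, which is the whole point: the choice of what silence means is a design decision, and Beacon resolved it in the advertiser’s favor.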
Further, we are not allowed to opt out of the system. At your Facebook profile, you can review a list of all the sites you’ve been to that have presented you with the Facebook spam-your-friends option, and you can opt out of the sites one at a time. But you cannot press a big red button that will take you out of the system entirely. So, if you’ve deselected Blockbuster and the Manly Sexual Inadequacy Clinic from the list, if you go to a new site that’s done the deal with Facebook, you’ll get the popup again there. We should be allowed to Just Say No, once and for all.
Why? Because privacy is not just about information. It’s all about the defaults.
If a couple is walking down the street, engaged in deep and quiet conversation, it certainly would violate their privacy to focus listening devices on them, record their conversation, and post it on the Internet. The couple would feel violated not only because their “information” — their conversation — was published but because they had the expectation that even though their sound waves were physically available to anyone walking on the street who cared to listen, norms prevent us from doing so. These norms are social defaults, and they are carefully calibrated to our social circumstances: The default for sidewalks is that you are not allowed to listen in on private conversations except in special circumstances. The default for showing up at a wedding party is that they can ask whether you’re with the bride’s or groom’s party, but they can’t ask you to show a driver’s license. The default at some schools is that your grades will be posted on a public bulletin board and at others that they will not. When we violate these norms, various forms of social opprobrium ensue. We even have special words for different types of violations: eavesdropping, being nosy, being a blabbermouth, etc.
Facebook is getting privacy right where privacy is taken as a matter of information transfer. But it is getting privacy wrong as a norm. Our expectation is that our transactions at one site are neither to be made known to other sites nor made known to our friends. We may well want to let our friends know what we’ve bought, but the norm and expectation is that we will not. Software defaults generally ought to reflect the social defaults. And when you’re as important as Facebook — two billion page views a day — your software’s defaults can nudge the social defaults.
Our privacy norms are changing rapidly. They have to, because we’ve now invented so many new ways to be in public. That’s why Facebook’s move is especially disappointing. Although they are rigorously supporting informational privacy, they are setting the defaults based not on what’s best for their users but on what’s best for them. It’s clearly and inarguably better for users to be able to opt out of the entire third-party system, but it’s clearly more lucrative for Facebook to make it hard to opt out (not to mention making it an opt-in system).
Businesses always choose sides, implicitly or explicitly. Facebook has been notable for being on its users’ side. Not in this case. In fact, because this new ad plan invokes Facebook on other companies’ sites, it feels like we’re being ganged up on. Even worse, in this case the gang is so strong, it could reshape privacy’s norms.