
June 10, 2011

[hyperpublic] The risks and beauty of hyperpublic life

Jeff Jarvis moderates a discussion. “We need principles to inform the architecture” for where we want to go, rather than waiting for terms of service from the sites we use. We need a discussion about terms of service for the Internet. (He’s careful to note that he’s not suggesting there be a single terms of service for the Net.) We all have a stake in the discussion of the public and private, Jeff says. We should be careful about our metaphors, he says, citing Doc Searls’ cautioning against calling it a medium since a medium is something that can be owned and controlled. “It is a platform for publics,” Jeff says.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Adam Greenfield, director of Urbanscale, a shop in NYC. He’s interested in how you design public spaces. He wants to push back against the idea of a networked city of the future: we already live in the networked city. Locative and declarative media (e.g., FourSquare) are widely adopted. We live among declarative objects, not just people declaring their position; e.g., TowerBridge tweets its open and closed states. In Tokyo, the side of a building is a QR code; it is an object with an informational shadow. Objects are increasingly capable of gathering, processing, displaying, transmitting and/or taking action on info…which implies new modes of surveillance. His contention: tens of millions of people are already living in these conditions, and we therefore need a new theory of networked objects.


He offers a taxonomy of what this class of objects implies. He begins with traffic signals controlled by motion detectors. The info is not uploaded to the network, and it has a clear public good. It is prima facie unobjectionable.


Then there is the mildly disruptive and disrespectful object. E.g., a sensor detects someone passing by a billboard that reacts to that presence. There’s no public good here. More concerning is a touch screen vending machine that makes gender assumptions based on interpretation of a camera image. Further, that info is gathered and used.


Another step into the disturbing: Advertisers scan faces and make predictive and prospectively normal assumptions that they sell to marketers.


But what about when power and knowledge reside in an ensemble of discrete things? E.g., in Barcelona, access to some streets depends on a variety of sensors and signs. It can even reside in code: surveillance camera software gets upgraded without a referendum.


We should be talking about public objects: any discrete object in the common spatial domain intended for the use and enjoyment of the general public [with a couple of refinements that went by too fast]. They should be open (as in APIs), and their goods should be non-rivalrous and non-excludable. This is great, but we should remember that it increases the “attack surface” for hostile forces. Also, we need to evolve the etiquettes and protocols of precedence and deconfliction. We should do this because it moves against the capture of public space by private entities, and it opens up urban resources like we’ve never seen (discoverable, addressable, queryable and scriptable). The right to the city should be underwritten by the architecture of its infrastructure.


Q: [jeff] Why is the gender-sensing vending machine creepy? Would you be ok if it guessed but let you correct it or ignore it?
A: I’ve been working with informed consent, but I heard this morning that that may not be the best model. We don’t want to over-burden people with alert boxes, etc.


Jeffrey Huang talks about a case study: the design of a new campus in the deserts of Ras Al Khaimah (one of the Emirates). In 2009, the Sheikh agreed to fund the creation of the new campus. Jeff and others were brought in to design the campus, from bare sand up. The initial idea for the project was to avoid creating a typical gated campus, but rather to make it open. They also wanted to avoid the water costs of creating a grass-lawned campus. “The ambition was to grow a campus where it made sense ecologically”: buildings where there’s natural cooling winds, etc. They’re designing large, fluid, open spaces, where “seeing and being seen is maximized.” There would be a network of sensors so that campus would be aware of what’s going on inside, including recognizing where individuals are. People’s profile info could be projected into their shadows. They wonder if they need special places of privacy. “There should be less necessity to design the private if and only if the hyperpublicness is adequately designed.” E.g. if no one owns the data, there’s full transparency about who looks at the data and what’s being captured.


Betsy Masiello from Google’s policy team gives some informal remarks. To her, a hyperpublic life implies Paris Hilton: Constant streaming, making your behavior available for everyone to see. But, she says, what this panel is really about is a data-driven life. It’s important not to blur the two. There’s public good that comes from big data analysis, and some baseline economic good.


She says she thinks about predictive analytics in two ways. 1. Analysis done to give you predictions about what you might like; it’s clear to the user what’s going on. 2. Predictions based on other people’s behavior, e.g., search, and Adam’s soda machine. Both create value. But what are their risks? The risk is a hyperpublic life: all this data gets re-identified and attached to your identity. But this misses something…


E.g., she came across a 1998 Jonathan Franzen essay, “Imperial Bedroom.” “Without shame there can be no distinction between public and private,” he wrote. She says you can feel shame even if you’re anonymous, but Franzen is basically right. Which brings her to a positive solution: “The design problem is how to construct and identify multiple identities, and construct and manage some degree of anonymity.” It is true that our tech will allow us to identify everyone, but policy requirements could bar companies from doing so. Likewise, there are some policy decisions that would make it easier to maintain multiple identities online.


Jeff: Your fear of re-identification surprises me.
Betsy: The hyperpublic identity is created from the collapse of contexts, without people knowing about it. People don’t know how all their contexts are becoming one. I think people want a separation between data used to personalize an ad and data they are sharing with their friends.
Jeff: This is John Palfrey’s “breakwalls”… But I’d think that Google would want as few restrictions as possible. They create liabilities for Google.
Betsy: That’s the design challenge. Search engines and Japanese soda machines haven’t gotten it right yet.
Jeff: What are the emerging principles? Separating gathering from usage. Control. Transparency…


Adam: I don’t think there’s anonymous data any more.
Betsy: Yes, but could we create it via policy?
Adam: There are some fine uses of predictive analytics. E.g., epidemiology. But not when the police use it to predict crimes.
Jeff: Why not? Ok, we’ll talk later.


Q: What about third party abuse?
Adam: Our principle should be “First, do no harm.”
Huang: It’s a problem often because the systems don’t know enough. Either roll it back, or train it so it can make better distinctions.
Betsy: You can maybe get both. FourSquare is an individual stating her identity; the flip side is anonymous data about locations. That provides tremendous value, and you can do it while protecting the identities.
Jeff: But if the rule is “if you can’t protect it, don’t collect it,” then we’ll never collect anything and won’t get any of those benefits.


Q: [latanya] It’s not true that only those with something to hide want to remain anonymous. E.g., if you hide only the positive results of HIV tests, you can tell who has HIV. You have to protect the privacy of those who do not have HIV as well.
Jeff: But I got benefit from going public with my prostate cancer.
Latanya: But we live in a world of parallel universes. You got to control which ones knew about your cancer.


Q: [I couldn’t hear it]
Betsy: You don’t need to reveal anything about the individual pieces of data in a big data set in order to learn from it.


Q: (jennie toomey) There are lots of things we want kept private that have nothing to do with guilt or shame. Much of what we keep private we use to create intimacy.
Betsy: I was quoting Franzen.


Q: Privacy means something different in non-democratic societies.
Adam: We know historically that if info can be used against us, it eventually will be.


Q: Recommended: Solove’s The Taxonomy of Privacy
Adam: The info collected by E. Germany was used against people after E. Germany fell.
Jeff: But if we listen only to the fears, we won’t get any of the benefits.


Categories: liveblog Tagged with: hyperpublic • privacy Date: June 10th, 2011 dw


[hyperpublic] Panel 2: Experience and re-creation

Jeffrey Schnapp introduces the second panel.

Beatriz Colomina gives a brief talk called “Blurred Vision: Architectures of Surveillance.” [I continue to have difficulty hearing due to the room’s poor acoustics and my own age-appropriate hearing loss. Also, Beatriz talks very fast.] She begins with a photo of a scene framed by windows. Communication is about bringing the outside in. So is glass; glass has taken over more and more of the building. She points to skyscrapers made out of glass that have an x-ray aesthetic. It is no coincidence that glass houses and X-rays occur at the same time, she says. X-rays exposed the inside of the body to the public eye, while architecture was disclosing the inside of the house to the public eye. X-rays acclimatized us to living in glass houses, including the glass house of blogging. Beatriz talks about architecture that looks further inward, through more and more layers, beyond transparency. [I lack acoustic confidence that I’m getting this right. Sorry.] With our surveillance equipment, x-ray vision is becoming pervasive, changing the definition of the private.

danah boyd gives a talk: “Teen privacy strategies in networked publics.” She begins by explaining she’s an ethnographer. How do young people think about privacy? The myth is that they don’t care about it, but they do. They care about it, but they also participate in very public places. Just because they want to participate in a public doesn’t mean they want to be public. Being active in a public does not mean they want everything to be public to everyone.

Networked publics are publics that are enabled by network technologies; they are simultaneously spaces constructed through tech and imagined communities. We are becoming public by default, and private through effort. danah quotes a 17-yr-old who explains that rather than negotiating publics to make things available one by one, she posts in a public space so it’s available to all of them.

New strategies are emerging. Privacy = ability to control a social situation, and to have agency to assert control over those situations. A 14yr old danah interviewed thinks that he’s signalling the social norms in his communications, but people comment inappropriately, so he’s started using some explicit social structures. Another young person deletes comments to her posts after she’s read them, and deletes her own comments on other people’s posts the next day. She’s trying to make the structure work for her.

A 17yr-old likes her mother but feels her mother over-reacts to FB posts. So, when the teen broke up with her boyfriend, she posted the lyrics from “Always Look on the Bright Side of Life.” This is social steganography, i.e., hiding in plain sight, for that song is from the crucifixion scene in The Life of Brian.

danah points to an online discussion of a social fight. The kids knew the details. The adults did not know if they were allowed to ask. The kids’ careful use of pronouns controlled access to meaning.

Sometimes we can use the tech, and sometimes we have to adopt social norms. In all of our discussion of privacy about the role of law, tech, and the market, we ought to pay careful attention to the social norms they’re trying to overrule. (She hat tips Lessig for these four.)

Ethan Zuckerman talks about the role of cute cats. Web 1.0 was about sharing info. Web 2.0 is about sharing photos of kittens. This has important implications for activists. The tools for kitten sharing are effective for activists. They’re easy to use, they’re pervasively viral, and there’s tremendous cost to a totalitarian regime trying to censor them because they have to throw out the cute cats with the revolutionary fervor. It raises the cost of censorship.

Ethan says that cute cats have a deep connection to activism. What happened in a dusty little town of 40,000 spread throughout Tunisia because of cute-cat social media. Protests happen; they get filmed and posted on Facebook. FB is pervasive, but it’s extremely hard to find the content, make sense of it, and translate it. So local people find it, make sense of it, and feed it to Al Jazeera. Now people can see the events and decide if they want to join in.

Why FB? Because Tunisia has blocked just about everything except FB. They tried to block it in 2008, which resulted in a 3x increase in use, because Tunisians inferred there was something good about FB. The day before stepping down, Tunisia’s leader offered three concessions: security forces would not fire on crowds, the tax on bread would be lowered, and Net freedom would be allowed.

Tunisia confirms Ethan’s theory, but Egypt is counter-evidence. The Egypt government shut off the Internet. China is manufacturing its own cute cats: you can post all the kitten vids you want on Chinese sites. “This is a much more effective way of combating the cute cat theory.” But it’s expensive and requires a huge amount of human labor to review.

What worries Ethan most is that we’re moving our public discourse into private spaces, e.g. FB and Google. “We’re leaving it up to the owners of these spaces whether we’ll be allowed to use these spaces for political purposes.” It’s not that these spaces are evil. Rather, these digital spaces have been designed for other purposes. They have an incentive to shut down profiles in response to complaints, especially when it’s in a different language. Also, the terms of service are often violated by activist content. And real name identity is often dangerous for activists.

Organizations are slowly but surely figuring out how to deal with this. But it’s slow and very difficult. E.g., video of the army deliberately killing unarmed civilians violates YouTube’s terms of service. But YouTube made an exception, putting up a warning that it’s disturbing video. This is great, but it exposes some basic tensions. For example, it’s not good for advertisers and thus runs against YouTube’s business model.

The challenge is that we invented these tools to enable a certain set of behaviors. We wanted friends to be able to exchange info, and we created terms of service for that. Now we’ve allowed those privately held spaces to become our networked public spheres. But the lines between private and public are not well suited for political and activist discourse. Do we ask corporations to continue hosting these, or do we try to come up with alternatives? We didn’t drive people to YouTube because it was good for activists, but for the other cute-cat reasons. Now we have to figure out the right tools.

Q: (zeynep) Value of real-name policies?
Ethan: It may be that we need public interest regulation of some of the policies of these corporations.
danah: Our tech will make real names no longer the best and only way to identify you. Systems of power will be able to identify people, and no amount of individual hiding within a collective will work. We need to rethink our relation to power as individuals and collectives.

Paul: We shouldn’t forget that it’s not just corporations; it’s American corporations. To shut down WikiLeaks, you just need Visa and Mastercard.
Ethan: WikiLeaks is vulnerable to credit card platforms because DDoS attacks made it move off its own platform to Amazon’s, and Amazon is vulnerable to such pressure. The Amazons have special responsibilities. Also, we’re now advising activists to always make sure there’s an English-language description of your material when you put it up on YouTube, etc., so that the YouTube admins can evaluate the take-down claims that arise.

Jeff Jarvis: Regulation is the wrong way. The question is: what is the definition of a public space for public speech? Other than lobbying private corps, what’s the right way?
Ethan: Rebecca MacKinnon’s upcoming book, Consent of the Networked, argues that we need to have a revolutionary moment in which the users of these spaces rise up and support the companies that are open to supporting them. Ultimately, though, we don’t have a way to build a FB in a decentralized fashion. We can’t have a networked conversation without some degree of centrality.
danah: Corporations have incentives that sometimes align with users’. There’s a lot of power when users think about alignment. Sometimes it’s about finding common interests, or shared norms, whether legal or social. It’s good to find those points of alignment.

Q: danah, have you seen designs that are more conducive to people following social norms?
danah: The design question can miss the way in which the tech is used in various contexts. E.g., you can design in tons of privacy, but nothing stops a parent from looking over the shoulder of a child. People will adjust if they understand the design. Design becomes especially important when there are changes. It’s important for designers to figure out how to tango with users as the design evolves.

Q: [tim from facebook] Every design for any networked system has consequences. The choice that has always bedevilled me: the same system that finds fake accounts for activists also identifies fake accounts from secret police. How do we avoid building systems that create a pseudo sense of privacy?
ethan: People do things with social platforms that we never intended, admirable or dangerous. How to figure it out? It’s got to be an ongoing process. But, as danah says, changing those decisions can be dangerous and disruptive. We need some way of opening up that process. The activist community should be involved in evolving the terms of service so that they recognize not just the needs of legitimate law enforcement, but also the needs of activists and citizens. It should not be a process just for lawyers but also for citizens.
danah: What is the moral environment in which we want to live? What outs activists can also out human traffickers. Some of the hardest questions are ahead of us.


Categories: culture, liveblog, peace Tagged with: architecture • design • hyperpublic • privacy Date: June 10th, 2011 dw


[hyperpublic] First panel: Delineating public and private

First panel at HyperPublic conf. Hurriedly typed and not re-read.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Paul Dourish: Think of privacy not so much as something that people have, but something that people do. “What are people doing when they are doing public, or doing private?” Think of doing privacy as one of the ways of engaging with a group.

And pay attention to the multiple publics we deal with when encountering media objects. When we encounter a media object, we think “this is aimed at people like me.” Publics = complicated sameness and difference. For example, for a couple of years he looked at paroled sex offenders in California who are being tracked with GPS. How do you think about space if you have to first worry about coming within 2,000 feet of a school, library, etc.? That reconfigures the scale at which public space is encountered: since it’s impossible to navigate at the level of 2,000 feet, these people think about which towns are safe for them. Instead of privacy, it helps to think in terms of our accountability to others.
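The 2,000-foot rule can be made concrete. Here is a minimal sketch of the distance check such a tracking system implies, with invented coordinates; a real system would use live GPS fixes and an actual database of school locations:

```python
import math

# Sketch: checking a position against a 2,000-foot exclusion radius around
# each school. All coordinates below are invented for illustration.
FEET_PER_METER = 3.28084

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def violates(pos, schools, limit_ft=2000.0):
    # True if pos is within limit_ft of any school.
    return any(haversine_m(*pos, *s) * FEET_PER_METER < limit_ft for s in schools)

schools = [(34.05, -118.25), (34.06, -118.24)]
print(violates((34.0502, -118.2501), schools))  # True: well inside 2,000 ft
```

Navigating by such point-radius checks, rather than by streets and blocks, is exactly the shift of scale Dourish describes.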

Jonathan Zittrain suggests an iPhone app that shows map routes that take account of the sex offenders’ rule of avoiding schools, etc. He also raises the relation of privacy and identity.

Laurent Stalder mentions work on what privacy meant within a house in 1880s England. Artifacts were introduced that affected privacy, from sliding doors to doorbells. Then he shows a 2008 floor plan that distinguishes much less between public and private, inside and outside — the rise of a differentiated set of threshold devices. What is the role of the architect when spaces are filled with an endless stream of people, information, fluids…? Laurent points to the continual renegotiation of borders and their consistency. [I had trouble hearing some of the talk; the room does not have good acoustics. Nor do my ears.] In conversation with JZ, Laurent contrasts two Harvard buildings, one of which has a clear inside and outside, and another that has a long transitional state.

John Palfrey says that lawyers are so engaged in the question of privacy because they too are designers, but of rule-sets. Lawyers have not done a great job in determining which rule-set about privacy will enable us to thrive. He makes three points: 1. The importance of human experience in these spaces. We are public by default, he says, crediting danah boyd. We’re learning that though we often trade convenience for control, we care about it in particular contexts, a changing set of practices. 2. The old tools haven’t worked well for us with privacy. E.g., the 4th Amendment doesn’t fit the cyber world well. 3. The systems that tend to work best are highly interoperable with one another; we don’t want to type the same info into multiple systems. Open, interoperable systems succeed. But that gives rise to privacy problems. We need places — breakwalls — where the data can be either slowed or stopped.

JZ points out that JP is, like Laurent, talking about having long thresholds.

JZ imagines a world in which many people “lifestream” their lives and we are able to do a query to see who was where at just about any time. That makes Google StreetView’s photo-ing of houses seem like nothing, he says.

In response to Jeff Jarvis’ question, Paul reminds us that the social takes up the architectural, so that the same threshold space (or any space) can take on different privacy norms for different cultures and sub-cultures.

JZ: Architectural spaces last for decades or centuries. Online spaces can be reconfigured easily. The “house” you moved into can be turned into something different by the site’s owners. E.g., Facebook tinkers with the space you use by changing…

Q: What is the purpose of the threshold?
Laurent: Connection and separation
Q: Don’t we want some type of digital threshold that does the job of introducing, transitioning, informing, etc.? “You keep some of where you were in where you are.” The lack of that affects identity and more.
JZ: You can imagine a web site that shows you where other people are visiting from. “Wow, a lot of folks are coming from AOL. This must not be a cool site.” :)
Paul: It’s important to historicize sites appropriately so we understand where they came from.

Me: It’s possible to misuse architectural spaces, because architecture is always intensely local. So, will privacy norms ever settle down in the global Web?
The invention of the chimney enabled privacy in homes, as opposed to the central fire. [Having trouble hearing] Will the poor not have Internet privacy, while the affluent do?
As important as the Net spaces are the spaces in which people use the Net. E.g., Net cafes in the developing world. Access and capital change publicness and privacy.
Paul: In China, people go to public spaces to play online games. (He says that they consider World of Warcraft a Chinese game in its values.) There certainly won’t be global agreements about privacy norms. Nor does there have to be, because your encounters with them always occur in local settings.
JZ: And within these spaces can be communities with their own norms.


Categories: liveblog Tagged with: architecture • hyperpublic • privacy • public Date: June 10th, 2011 dw


[hyperpublic] Judith Donath

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Urs Gasser opens the conference by reflecting on the recent Swiss court decision that Google StreetView has to go to extraordinary lengths to obscure personal identifiers (like faces and racial identity), especially in front of “sensitive” public areas.

Judith Donath points to our increased discomfort with the new lines between public and private, in an environment where it’s not only hard to separate them, but where the old well-defined norms don’t work. This is not solely a problem of the online world, she points out. How do we understand this new public as designers? In the online world, you can be public while sitting alone in your room.

She says she was one of the initiators of this interdisciplinary conference (also: Jeff Huang) because different fields have different ideas of what is desirable. E.g., lawyers traditionally think that privacy is a goal in itself.

Judith says we’re looking at this topic at a time in human history when we’ve had an almost unprecedented amount of privacy; e.g., we are more mobile and thus can shed our prior public selves. We have also been more isolated and alienated: we can live without engaging with others, in a city of strangers, in a workplace where all our ties are weak, etc.

She reminds us that during the day we should be thinking about how what we learn can be applied to help build a better civil society.


Categories: culture Tagged with: hyperpublic • privacy • public Date: June 10th, 2011 dw


May 19, 2011

Rebooting library privacy

The upcoming HyperPublic conference has posted a provocation I wrote a while ago but didn’t get around to posting, on rebooting library privacy now that we’re in the age of social networks. (Ok, so the truth is that I didn’t post it because I don’t have a lot of confidence in it.) Here’s the opening couple of subsections:

Why library privacy matters

Without library privacy, individuals might not engage in free and open inquiry for fear that their interactions with the library will be used against them.

Library privacy thus establishes libraries as a sanctuary for thought, a safe place in which any idea can be explored.

This in turn establishes the institution that sponsors the library — the town, the school, the government — as a believer in the value of free inquiry.

This in turn establishes the notion of free, open, fearless inquiry as a social good deserving of support and protection.

Thus, the value of library privacy scales seamlessly from the individual to the culture.

Privacy among the virtues

Library privacy therefore matters, but it has never been the only or even the highest value supported by libraries.

The privacy libraries have defended most strictly has been privacy from the government. Privacy from one’s neighbors has been protected rather loosely by norms, and by policies inhibiting the systematic gathering of data. For example, libraries do not give each user a private reading booth with a door and a lock; they thus tolerate less privacy than provided by a typical clothing store changing room or the library’s own restrooms. Likewise, few libraries enforce rules that require users to stand so far apart on check-out lines that they cannot see the books being carried by others. Further, few libraries cover all books with unlabeled gray buckram to keep them from being identifiable in the hands of users.

Privacy from neighbors has been less vigorously enforced than privacy from government agents because neighborly violations of privacy are perceived to be less consequential, and because there are positive values to having shared social spaces for reading.

While privacy has been a very high value for libraries, it has never been an absolute value, and is shaded based on norms, convenience, and circumstance.

more…


Categories: libraries Tagged with: dpla • libraries • privacy Date: May 19th, 2011 dw


November 30, 2010

[bigdata] Panel: A Thousand Points of Data

Paul Ohm (law prof at U of Colorado Law School — here’s a paper of his) moderates a panel among those with lots of data. Panelists: Jessica Staddon (research scientist, Google), Thomas Lento (Facebook), Arvind Narayanan (post-doc, Stanford), and Dan Levin (grad student, U of Mich).

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Dan Levin asks what Big Data could look like in the context of law. He shows a citation network for a Supreme Court decision. “The common law is a network,” he says. He shows a movie of the citation network of the first thirty years of the Supreme Court. Fascinating. Marbury remains an edge node for a long time. In 1818, the net of internal references blooms explosively. “We could have a legalistic genome project,” he says. [Watch the video here.]
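The citation network Levin describes is, at bottom, just an edge list. A minimal sketch, with invented case names rather than real Supreme Court data, showing how even a plain in-degree count gives a crude measure of precedential weight:

```python
from collections import Counter

# Sketch of a court-citation network as an edge list. An edge (A, B) means
# "opinion A cites opinion B". All case names here are invented placeholders.
citations = [
    ("Case_1803_A", "Case_1790_B"),
    ("Case_1810_C", "Case_1803_A"),
    ("Case_1818_D", "Case_1803_A"),
    ("Case_1818_D", "Case_1810_C"),
]

# In-degree (how often later opinions cite a case) approximates how central
# a decision is in the network.
in_degree = Counter(cited for _, cited in citations)
most_cited, count = in_degree.most_common(1)[0]
print(most_cited, count)  # Case_1803_A 2
```

Real analyses would use richer centrality measures over the full corpus, but the data structure is the same.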

What will we be able to do with big data?

Thomas Lento (Facebook): Google flu tracking. Predicting via search terms.

Jessica Staddon (Google): Flu tracking works pretty well. We’ll see more personalization to deliver more relevant info. Maybe even tailor privacy and security settings.

Dan: If someone comes to you as a lawyer and asks if she has a case, you’ll do a better job deciding if you can algorithmically scour the PACER database of court records. We are heading for a legal informatics revolution.

Thomas: Imagine someone could tell you everything about yourself, and cross ref you with other people, say you’re like those people, and broadcast it to the world. There’d be a high potential for abuse. That’s something to worry about. Further, as data gets bigger, the granularity and accuracy of predictions gets better. E.g., we were able to beat the polls by doing sentiment analysis of msgs on Facebook that mention Obama or McCain. If I know who your friends are and what they like, I don’t actually have to know that much about you to predict what sort of ads to show you. As the computational power gets to the point where anyone can run these processes, it’ll be a big challenge…

Jessica: Companies have a heck of a lot to lose if they abuse privacy.

Helen Nissenbaum: The harm isn’t always to the individual. It can be harm to the democratic system. It’s not about the harm of getting targeted ads. It’s about the institutions that can be harmed. Could someone explain to me why to get the benefits of something like the Flu Trends you have to be targeted down to the individual level?

Jessica: We don’t always need the raw data for doing many types of trend analysis. We need the raw data for lots of other things.

Arvind: There are misaligned incentives everywhere. For the companies, it’s collect data first and ask questions later; you never know what you’ll need.

Thomas: It’s hard to understand the costs and benefits at the individual level. We’re all looking to build the next great iteration or the next great product. The benefits of collecting all that data are not clearly defined. The cost to the user is unclear, especially down the line.

Jessica: Yes, we don’t really understand the incentives when it comes to privacy. We don’t know if giving users more control over privacy will actually cost us data.

Arvind describes some of his work on re-identification, i.e., taking anonymized data and de-anonymizing it. (Arvind worked on the deanonymizing of Netflix records.) Aggregation is a much better way of doing things, although we have to be careful about it.
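One common form of careful aggregation is small-cell suppression: release a count only when enough individuals fall in the bucket, since a bucket of one or two can single a person out. A minimal sketch with invented records and threshold (this is a simplification, not Arvind’s actual technique):

```python
from collections import Counter

# Minimal sketch of aggregation with small-cell suppression: release a count
# only when at least K individuals fall in the bucket. Records, field names,
# and the threshold K are invented for illustration.
K = 3
records = [
    {"zip": "02138", "rental": "Alien"},
    {"zip": "02138", "rental": "Alien"},
    {"zip": "02138", "rental": "Alien"},
    {"zip": "02139", "rental": "Brazil"},
]

counts = Counter((r["zip"], r["rental"]) for r in records)
released = {bucket: n for bucket, n in counts.items() if n >= K}
print(released)  # {('02138', 'Alien'): 3}
```

The lone renter in 02139 is suppressed rather than published, which is exactly the case where raw or thinly aggregated data invites re-identification.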

Q: In other fields, we hear about distributed innovation. Does big data require companies to centralize it? And how about giving users more visibility into the data they’ve contributed — e.g., Judith Donath’s data mirrors? Can we give more access to individuals without compromising privacy?

Thomas: You can do that already at FB and Google. You can see what your data looks like to an outside person. But it’s very hard to make those controls understandable. There are capital expenditures to be able to do big data processing. So, it’ll be hard for individuals, although distributed processing might work.

Paul: Help us understand how to balance the costs and benefits? And how about the effect on innovation? E.g., I’m sorry that Netflix canceled round 2 of its contest because of the re-identification issue Arvind brought to light.

Arvind: No silver bullets. It can help to have a middleman, which helps with the misaligned incentives. This would be its own business: a platform that enables the analysis of data in a privacy-enabled environment. Data comes in one side. Analysis is done in the middle. There’s auditing and review.

Paul: Will the market do this?

Jessica: We should be thinking about systems like that, but also about the impact of giving the user more controls and transparency.

Paul: Big Data promises vague benefits — we’ll build something spectacular — but that’s a lot to ask for the privacy costs.

Paul: How much has the IRB (institutional review board) internalized the dangers of Big Data and privacy?

Daniel: I’d like to see more transparency. I’d like to know what the process is.

Arvind: The IRB is not always well suited to the concerns of computer scientists. Maybe the current monolithic structure is not the best way.

Paul: What mode of solution of privacy concerns gives you the most hope? Law? Self-regulation? Consent? What?

Jessica: The one getting the least attention is the data itself. At the root of a lot of privacy problems is the need to detect anomalies. Large data sets help with this detection. We should put more effort into turning the data around to use it for privacy protection.

Paul: Is there an incentive in the corporate environment?

Jessica: Google has taken some small steps in this direction. E.g., Google’s “Got the wrong Bob” tool for Gmail warns you if you seem to have included the wrong person in a multi-recipient email. [It’s a useful tool. I send more email to the Annie I work with than to the Annie I’m married to, so my autocomplete keeps wanting to send the Annie I work with information about my family. Got the wrong Bob catches those errors.]
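The underlying idea can be sketched as a co-occurrence test: flag a recipient who has rarely appeared alongside the others in past messages. The history, addresses, and threshold below are invented, and this is not Gmail’s actual algorithm:

```python
# Toy "wrong recipient?" check: flag a recipient who has not co-occurred
# with the others in past messages. History, addresses, and threshold are
# invented for illustration; this is not Gmail's actual algorithm.
history = [
    {"annie.work@example.com", "team@example.com"},
    {"annie.work@example.com", "team@example.com"},
    {"annie.work@example.com", "boss@example.com"},
    {"annie.home@example.com", "family@example.com"},
]

def cooccurrence(a, b):
    return sum(1 for msg in history if a in msg and b in msg)

def suspicious(recipients, threshold=1):
    # A recipient is flagged if no other recipient on this draft has
    # co-occurred with them at least `threshold` times before.
    flagged = []
    for r in recipients:
        others = [o for o in recipients if o != r]
        if others and max(cooccurrence(r, o) for o in others) < threshold:
            flagged.append(r)
    return flagged

draft = ["annie.home@example.com", "annie.work@example.com", "team@example.com"]
print(suspicious(draft))  # ['annie.home@example.com']
```

This is also a small instance of Jessica’s earlier point: the same data that creates the privacy risk is what powers the anomaly detection that protects against it.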

Dan: It’s hard to come up with general solutions. The solutions tend to be highly specific.

Arvind: Consent. People think it doesn’t work, but we could reboot it. M. Ryan Calo at Stanford is working on “visceral notice,” rather than burying consent at the end of a long legal notice.

Thomas: Half of our users have used privacy controls, despite what people think. Yes, our controls could be simpler, but we’ve been working on it. We also need to educate people.

Q: FB keeps shifting the defaults more toward disclosure, so users have to go in and set them back.
Thomas: There were a couple of privacy migrations. It’s painful to transition users, and we let them adjust privacy controls. There is a continuum between the value of the service and privacy: with total privacy the service would have no value. It also wouldn’t work if everything were open: people share more if they feel they control who sees it. We think we’ve stabilized it and are working on simplification and education.

Paul: I’d pick a different metaphor: The birds flying south in a “privacy migration”…

Thomas: In FB, you have to manage all these pieces of content that are floating around; you can’t just put them in your “house” for them to be private. We’ve made mistakes but have worked on correcting them. It’s a struggle over a mode of control of info and privacy that is still very new.


Categories: too big to know Tagged with: 2b2k • bigdata • facebook • google • privacy Date: November 30th, 2010 dw


November 17, 2010

[defrag] Maggie Fox on privacy

Maggie Fox [twitter:maggiefox] says we think about privacy wrong.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

We can feel violated when what we thought was private goes into unwanted hands. “Violated” is a strong word, and originally meant someone crossing physical space, coming into your house. Our laws of privacy are all about physical place. “We suck at context. We think here and now is all there is.” Privacy is not universal, she says. The notion that you have a private space that no one else can come into is a Western concept. In fact, ours is an American concept. There is no Russian word for privacy. George Loewenstein’s study showed the cultural basis of privacy. He found that when guarantees of confidentiality were given and people were asked to disclose things, disclosure dropped by 50%. And the more informal the disclosure statement on a site was, the more they disclosed. People don’t think about privacy unless they’re told to think about it.

Privacy is a new concept, relative to human history. It is not global. It is rooted in 18th-century property law. And it’s very squishy (= contextual). And now we’re digital. But most people really aren’t all that interested in privacy. We leave breadcrumbs all the time. “In the digital revolution, that data is incredibly valuable, but not to Big Brother.” “If you’re a spy, you shouldn’t be on Twitter.” Worrying about that is a red herring.

We ought to be much more worried about advertisers’ use of data. Their business model is ending, Maggie says. They want to transition from trying to get all the eyeballs to getting the right eyeballs. There is a market for your data. Your privacy is no longer a place. It is a commodity — something people want to buy. You should worry more about Facebook than Big Brother.

So we need to approach privacy differently. Right now, we treat privacy as something that makes you feel weird when someone violates it, e.g., when your Mom refers to your FB page. But, the marketers aren’t just making you feel weird. They’re taking something from you: your data.

Your data has value, and you ought to extract that value. Advertising recognizes that with profit-sharing, discount, and loyalty programs: you can track me in exchange for something I want.

The big sites like Amazon have value because of the data we’ve given them. Our aggregated data is the information age’s natural resource.

We need to think about privacy differently, Maggie concludes.

Q: [esther dyson] Will a company create a service that actually represents the user?
A: Great question. I don’t have the answer.


Categories: cluetrain, liveblog, marketing Tagged with: marketing • privacy Date: November 17th, 2010 dw


July 16, 2009

Techcrunch’s RT of @Ev email

Sam Bayard of the Berkman Center’s Citizen Media Law Project has posted an explanation of the legal issues around TechCrunch posting some of the content of the email stolen from Twitter’s founders.

It seems different to me than when people posted internal messages from Diebold, because there was a clear public interest in the reliability of voting machines. I’m trying to bracket out the sense that Twitter is one of us, but I’m failing. The whole thing makes me feel icky.

Tags: twitter techcrunch diebold privacy media


Categories: Uncategorized Tagged with: diebold • digital culture • media • privacy • techcrunch • twitter Date: July 16th, 2009 dw


March 25, 2009

Making it harder to de-anonymize speakers

From a press release:

In a case involving important First Amendment rights, the Citizen Media Law Project (“CMLP”) joined a number of media and advocacy organizations, including Gannett Co., Inc., Hearst Corporation, Illinois Press Association, Online News Association, Public Citizen, Reporters Committee for Freedom of the Press, and Tribune Company, in asking an Illinois appellate court to protect the rights of anonymous speakers online by imposing procedural safeguards before requiring that their identities be disclosed.

The CMLP is a Berkman project. More here…

[Tags: berkman cmlp freedom_of_speech anonymity privacy free_speech ]


Categories: Uncategorized Tagged with: anonymity • berkman • cmlp • digital rights • privacy Date: March 25th, 2009 dw


March 16, 2009

Extra Sensory Keyboard Detection

Researchers have discovered ways to pick up your keystrokes by reading tiny scraps of electromagnetic radiation or, with PS/2-connected keyboards, just by plugging into the power grid. It turns out Cryptonomicon wasn’t paranoid enough!

[Tags: security ]


Categories: Uncategorized Tagged with: privacy • security Date: March 16th, 2009 dw




Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!