October 10, 2002
AKMA on the Day
AKMA’s blogging of the day at the DigitalID World conference is succinct, nuanced and finds the actually important points on which to comment. No blogarrhea on AKMA’s site, unlike, um, here.
I moderated this session and thus (ironically?) am unable to blog about its content in a trustworthy way. (Nathan Torkington has blogged a rough transcript of the session.)
The session was way too short, and I found it frustrating not to be able to drive the issues very deep. My take-away was pretty depressing: there is a generally shared assumption here (as far as I can tell) that DRM is of course a technology issue, and that so long as the technology allows for the enforcement of a wide range of usage policies, the technology itself is neutral. But it’s not. Transferring the application of rules from humans to machines is not neutral. Software has no judgment; it is incapable of weighing context or intention. We are thus going to get for digital content the dystopia we’ve been imagining for 100 years: an absolutist bureaucracy that believes the perfect world is the one in which rules are enforced perfectly.
The DRM Paradox
I asked why we don’t have DRM yet and one of the panelists said that it was because users are happy with things the way they are. I wanted to say – but didn’t because I was the moderator – that you get a paradox if you put that together with what Doc Searls said yesterday: DRM won’t take off until someone builds something that users actually want. Well, the market has spoken. DRM is a constriction. We don’t want it. So it can only come into existence by being imposed, for it is doing to users something that we don’t want done to us.
That’s not to say that if the market wants free CDs, it should get free CDs. But it does at least mean that claiming that DRM is a user service is a crock. If it’s something that, for the sake of establishing a sustainable marketplace, has to be crammed down the throat of users, then let’s at least admit it. (But, see the next point…)
The DRM Fallacy
“The technology merely enables users and vendors to negotiate a license.” “This is purely opt-in. If a user doesn’t like a license agreement, he doesn’t have to say yes.”
These would be good arguments if the market weren’t already skewed by an OS monopoly and a content cartel. “Opting out” of seeing Hollywood movies is like opting out of our culture. We can always be media hermits. Some choice.
(Chris RageBoy Locke, my friend and co-author, says that this focuses too much on Mass Media and ignores the many voices that will come from the grass roots. Definitely. But although mass media may not be the only source, it is one that people will continue to care about.)
Craig is CTO, Advanced Strategies and Policy for Microsoft. He’s going to build the case for digital identity as central to the progress of the computer industry and the progress of computing itself.
He says MSFT got interested in “trusted computing” because they realized that if people no longer trusted their computers, they’d stop using computers. (Oddly, the notion that content owners could have access to what’s on my computer makes me much less trustful.)
The “core tenets” of trustworthy computing, he says, are security, privacy, reliability and business integrity (i.e., the relationship of the vendor to the consumer).
Digital identity is a “building block” of trustworthy computing. It involves all four tenets. To get people to accept it — and “MS identity technologies are opt-in by philosophy” — they will have to be educated about the benefits.
[“Opt-in” is a relative term. If in the future I need to use Microsoft Palladium to download Hollywood content, it is opt-in only if I’m willing to opt out of the entertainment mainstream.]
A MSFT product manager is about to give us a demo of a future feature of Passport that will tell you how secure the password you just created is. (Presumably, if you try to use your user name as a password or the words “password” or “god,” the thing reaches out and physically slaps you.)
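(The general idea is simple enough to sketch. Here’s a minimal, hypothetical strength check of my own devising, not Passport’s actual algorithm; the common-password list and the scoring are purely illustrative.)

```python
# A minimal, hypothetical password-strength check. My own illustration of the
# general idea, not Passport's actual algorithm.

COMMON_PASSWORDS = {"password", "god", "letmein", "123456", "qwerty"}

def strength(username, password):
    """Return a rough rating: 'weak', 'medium', or 'strong'."""
    pw = password.lower()
    if pw in COMMON_PASSWORDS or pw == username.lower():
        return "weak"  # the slap-worthy cases
    score = 0
    if len(password) >= 8:
        score += 1
    if any(c.isdigit() for c in password):
        score += 1
    if any(c.isupper() for c in password) and any(c.islower() for c in password):
        score += 1
    if any(not c.isalnum() for c in password):
        score += 1
    if score >= 3:
        return "strong"
    return "medium" if score >= 2 else "weak"

print(strength("jsmith", "god"))          # weak
print(strength("jsmith", "Jo4h!o-2002"))  # strong
```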
Now we’re seeing a demo of how MSN8 enables parents to set up controls for what their children can do online. E.g., the parent can specify the subject areas of pages the child can see; it looked like a list of about 50 topics. Of course, filtering by topic is notoriously and conceptually unreliable. So, MSN lets the kid send an email to her parents asking for permission. (The MSFT guy calls it a “feedback loop.”)
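(Here’s a toy sketch of what that “feedback loop” might look like. The topic list, the pretend classifier and the approval flow are my own inventions, not MSN8’s machinery.)

```python
# A toy sketch of topic filtering with a parent-approval "feedback loop."
# Everything here (topics, classifier, approval flow) is a hypothetical
# stand-in, not MSN8's actual implementation.

allowed_topics = {"homework", "games", "science"}  # chosen by the parent
pending_requests = []                              # requests awaiting a parental decision

def classify(url):
    """Pretend topic classifier; real topic detection is the unreliable part."""
    return "music" if "mp3" in url else "homework"

def can_visit(child, url):
    topic = classify(url)
    if topic in allowed_topics:
        return True
    # Outside the allow-list: instead of a flat "no," queue a request to the parent.
    pending_requests.append({"child": child, "url": url, "topic": topic})
    return False

def parent_approves(topic):
    """The parent grants the request; the topic joins the allow-list."""
    allowed_topics.add(topic)

print(can_visit("kid", "http://example.com/mp3s"))  # False; request queued
parent_approves("music")
print(can_visit("kid", "http://example.com/mp3s"))  # True after approval
```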
Craig is back. The demo was, in short, a waste of time.
He’s now working on showing how deep and complex the issue of digital IDs is. IDs are needed not only for humans but also for machines and for software.
Digital Identity has many layers, he says (a rough sketch of the layering follows the list):
a. Identity: collections of attributes, some provided by user, some inferred
b. Authorization: This user can perform certain activities
c. Personally identifiable information: True name
d. Pseudonymity: Aliases, screen names, etc.
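(To make the layering concrete, here’s a rough sketch of those four layers as plain data. It’s my own illustration, not anything Microsoft presented, and the field names are made up.)

```python
# A rough sketch of the four layers as a plain data structure. My own
# illustration with invented field names, not anything Microsoft showed.
from dataclasses import dataclass, field

@dataclass
class DigitalIdentity:
    # a. Identity: collections of attributes, some user-provided, some inferred
    stated_attributes: dict = field(default_factory=dict)
    inferred_attributes: dict = field(default_factory=dict)
    # b. Authorization: activities this identity is allowed to perform
    permissions: set = field(default_factory=set)
    # c. Personally identifiable information: the "true name"
    true_name: str = ""
    # d. Pseudonymity: aliases, screen names, etc.
    aliases: list = field(default_factory=list)

    def may(self, activity):
        return activity in self.permissions

me = DigitalIdentity(stated_attributes={"country": "US"},
                     inferred_attributes={"probable_timezone": "EST"},
                     permissions={"post_comment"},
                     aliases=["bookworm42"])
print(me.may("post_comment"))    # True
print(me.may("delete_account"))  # False
```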
As computers become more deeply networked, digital ID becomes even more necessary because the computers are more intimate with one another.
Good job of complexifying the issue. (I’m not being snarky. It’s good to see how deep the waters are.)
Craig is announcing the “Passport Shared Source Release,” releasing the code that enables other companies to integrate with Passport. Big deal. As fellow blogger Frank Paynter, sitting next to me, just whispered, Craig referred to this as “enhancing the ecosystem,” which makes sense if you assume that Microsoft is the ecosystem.
Craig is wrapping up by talking about “future scenarios,” that is, places where the digital ID tentacles need to reach. It’s a pretty grainy list: Secure extranets and e-government, sure, but “DRM of corporate and personal documents” is ominous in its implications. Of course, this is no surprise. There is an inexorable logic to DRM: MP3s today, your email tomorrow. Then again, the obstacle to DRM of individual documents isn’t the lack of digital IDs but the difficulty (impossibility?) of designing a user interface that doesn’t feel like your office has been taken over by Kafkaesque, form-wielding bureaucrats. Indeed, Craig is concluding that we will see an increase in regulation over more and more of our lives, and we will have to work hard to strike the right balance.
And, he points out, this is a trans-national issue because of the nature of government.
Craig is obviously thoughtful, smart and reasonable. And it is reasonable for operating system companies to be involved in digital ID issues. But that they should be the locus of these efforts still strikes me as just as inappropriate as allowing the providers of media pipes to control the content that flows through them.
[Phil Windley, CIO of Utah, has also blogged Mundie’s speech.]
(Still blogging from the DigitalID World conference. The aggregation of blogs is here.)
Esther Dyson, the moderator, begins the morning general session by asking the conference organizer why there’s no list of attendees. (Actually, she began by graciously saying to this audience of 250 that in a few years, 800 people will claim to have been at the first Digital ID World conference.)
The panelists are Michael Calhoun, Principal of CSC Global Health Solutions and Nikolaj Nyholm, CTO of ASCIO Technologies.
The topic is how policy and practicality collide. Everyone, Esther says, is in favor of privacy and people controlling their own information. But what about transparency, i.e., people knowing the business model, the use of the data, who requested it, etc.?
Michael talks about HIPAA, federal legislation passed in ’96 that required the Dept. of Health and Human Services to come up with regulations for handling patient information across the health care industry. It says that the individual owns his/her health information, possibly to a degree that can be burdensome to good sense. And the data is “tagged” so that if it’s given to a third party, there are still agreements about how it can be used.
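(The “tagging” idea is worth making concrete: the usage agreement travels with the data, so a third party that receives it inherits the restrictions. A toy sketch of my own; the purpose vocabulary and field names are hypothetical, not HIPAA’s actual terms.)

```python
# A toy sketch of "tagged" patient data: the usage agreement travels with the
# record when it is handed to a third party. Field names and the purpose
# vocabulary are hypothetical, not HIPAA's actual terms.

class TaggedRecord:
    def __init__(self, owner, data, permitted_purposes):
        self.owner = owner                          # the individual owns the information
        self.data = data
        self.permitted_purposes = set(permitted_purposes)

    def disclose(self, recipient, purpose):
        """Release the record only for a permitted purpose; the tag goes along."""
        if purpose not in self.permitted_purposes:
            raise PermissionError(f"{purpose!r} is not permitted by the data owner")
        print(f"Disclosed to {recipient} for {purpose}; the same restrictions still apply.")
        return TaggedRecord(self.owner, self.data, self.permitted_purposes)

record = TaggedRecord("patient-123", {"diagnosis": "..."}, {"treatment", "billing"})
record.disclose("insurer", "billing")               # allowed under the agreement
try:
    record.disclose("marketer", "marketing")        # not in the agreement
except PermissionError as err:
    print("Blocked:", err)
```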
Nikolaj says that this is the most European privacy policy he’s heard of in the US. Generally in Europe, he says, the data just isn’t collected. Stores don’t even collect phone numbers from customers.
Esther summarizes nicely: Europe is a bureaucratic culture while the US is a legalistic one. Nikolaj says that US law is very binary: you’ll get your ass sued if you get it wrong. Just as security isn’t binary, he says (it’s all about risk management), so too HIPAA won’t provide binary, perfect privacy, but it can help enforce it.
Esther, the acknowledged master of moderating panels, has now asked all the people lined up to state their questions before the panel addresses them. As a result, themes emerge and some questions that are not as helpful will undoubtedly fall by the wayside. Nice technique.
Jon Udell of InfoWorld asks if HIPAA can actually be implemented and if it will be extraordinarily expensive. Michael replies that the answer is Yes to both questions. The “drop dead” date for implementation is April ’03 and it’s going forward.
Esther’s closing comments: Most of the information about her hasn’t been gathered from transactions. It’s what has been written about her, what she’s written, emails, etc. “In the future, we’ll all have about as much privacy as a rock star.” We need transparency to enable us to fight against the invasions we don’t want.
Good session.
October 9, 2002
Here’s a clear (and depressing) account of Larry Lessig’s day in court.
This session (at the DigitalID World conference) was entirely conversational which is great to listen to but hard to summarize. Phil Becker, the moderator, is doing a terrific job of keeping the flow going.
Now he’s pushing on Microsoft as having the worst reputation on which to build trust relationships. The Microsoft guy replies that they’re trying to do all the right things, that Palladium lets the user control her/his own privacy, and Media Player 9 asks you out of the box how you want to set your privacy settings. (I’ve already written too much today about Palladium, but touting the settings screen of Media Player is really pretty lame, even as part of a list of steps Microsoft has taken. If Microsoft really wanted to do the right thing, it would support standards and independent authentication systems.)
An audience member asks: Can I delete my Passport identity from the Microsoft database if I want to? The Microsoft guy replies Yes (I think that’s what he said), but points out that deleting an ID raises a whole new set of issues such as: How do you recreate an identity you deleted by mistake? (Why not just put up a bunch of warnings to make sure you’re not deleting it by mistake?)
An audience member says that the biggest thing eroding consumer trust is spam, because we have the sense that our personal information is being picked up and used. This doesn’t seem right to me. Sure, spam means that your email address is being shopped around, but the spam is so random that the spammers clearly know next to nothing about who I am; I feel my time has been invaded, not my privacy.
Audience member: Does Microsoft get value from Passport? Microsoft’s answer: It enables Web services; MSN needs an authentication service; Microsoft sells more servers because of it.
Good question, and it was a crisp, blunt answer; Microsofties are usually really likeable. Yet the answer didn’t convince me that there are no other reasons why Microsoft wants to be in the identity management business. The reply was short, of course, so maybe there’s more to be said. Nevertheless, Microsoft getting into the trust business is a new definition of chutzpah. That they don’t see this is continually surprising for a company so savvy about customers.
This panel was over my head technically, but I got my money’s worth from a single comment from Doc Searls, the moderator. He said that none of this identity stuff will happen until someone comes up with an application that isn’t pushed on users but that users actually want. I thought I heard some glubs as various software strategies sank under the weight of their own presumption.
If you’re interested in the Open Source Identity topic, Denise Howell has blogged the session well.
There is a discussion board where you can talk about the ridiculous and wrong things I’m saying.
David Isenberg in a posting to a mail list notes that one of the headlines calls the case against the Sonny Bono Copyright Law that Larry Lessig today argued in front of the Supreme Court “Sonny v. Share.”
Already too much to blog. I’m sitting in a session I came late to because I was engrossed in a conversation with Peter Biddle, a leader of the Microsoft Palladium team. He’s an engineer and a partisan and able to remain good-natured even while being, um, pounded. Gives as good as he gets. I felt like we actually got down to some basic issues. Thank you, Peter: you’re a good guy.
Palladium is a Microsoft initiative that will bring high security to your PC. It will act as a vault for content and enable users and “content providers” to negotiate terms of usage. Palladium is neutral about those usage terms; it’ll enforce any that are agreed upon.
I’m reluctant to present Peter’s point of view since he has been thinking about this for a long time and is quite eloquent. Nevertheless, I’m going to. As I understood the conversation, we got down to this: Peter is designing Palladium to be neutral to usage policies but also capable of enforcing them. So, if Eminem says that you can download his new song but you can play it once for $5 and ten times for $10, then, fine, Palladium will Make It So. And if Joe Mahoney says you can download his new song and play it as much as you like, but you can’t resell it digitally, then that’s just as fine. Palladium is neutral to policy. Yet, once terms are agreed upon, it builds that policy into the computing architecture (as Denise Howell put it during our conversation).
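(To see what “neutral but enforcing” means, here’s a toy rendering of those two licenses, a one-play cap and an unlimited-play, no-resale deal, enforced by the same indifferent machinery. This is my sketch of the idea as Peter described it, not Palladium’s actual design.)

```python
# A toy rendering of policy-neutral enforcement: the vault does not care what
# the policy says, only that it is followed to the letter. My sketch of the
# idea, not Palladium's actual architecture.

class License:
    def __init__(self, max_plays=None, resale_allowed=True):
        self.max_plays = max_plays            # None means unlimited plays
        self.resale_allowed = resale_allowed
        self.plays = 0

class Vault:
    """Enforces whatever terms were agreed upon; it will Make It So."""
    def play(self, lic):
        if lic.max_plays is not None and lic.plays >= lic.max_plays:
            return False                      # no judgment, no context, no leeway
        lic.plays += 1
        return True

    def resell(self, lic):
        return lic.resale_allowed

vault = Vault()
pay_per_play = License(max_plays=1, resale_allowed=False)        # "play it once for $5"
free_but_no_resale = License(max_plays=None, resale_allowed=False)

print(vault.play(pay_per_play))          # True: the one play you paid for
print(vault.play(pay_per_play))          # False: the second play is simply refused
print(vault.play(free_but_no_resale))    # True, as often as you like
print(vault.resell(free_but_no_resale))  # False: the no-resale term is enforced
```

The point of the sketch: the Vault class is identical for both licenses; only the policy differs. That’s the sense in which it’s “neutral,” and also the sense in which the leeway is gone.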
And that’s one of two problems I have with Palladium. The real world is enriched by the leeway that’s inevitable in it. Even as we assert our “intellectual property” rights over our ideas and expressions, we know that in the real world those rights are often unenforceable. For example, a report from Forrester Research may have “Do not photocopy” at the bottom of every page, but you won’t get sued if you run off a couple of copies of the graph on page 110 to use at an internal meeting. We make these decisions all the time, and the world is richer for it. Further, as Denise pointed out, when there are infractions worth prosecuting, we have human judges and a legal system that comes to reasonable (usually) decisions. Implementing policy in silicon drives the leeway out of the system.
My second problem is that Palladium may be neutral in its architecture but it is being born into a world that isn’t neutral. Content has been locked up by gigantic, greedy, stupid companies (mainly headquartered near Hollywood) and the company producing Palladium has been declared a monopoly. If Palladium becomes the only way that the entertainment industry can sell digital content according to strictly enforced rules of usage, then Microsoft will become the de facto entertainment player, forging the “unholy alliance” that so many of us fear. The fact that Microsoft is not committed to producing Palladium across multiple platforms in a timely way is certainly unsettling.
There was nothing that I said, with the help of Denise, that Peter hasn’t heard before. His response was surprising to me. Palladium won’t really lock down content that well, he said. Pirates will still be able to get unlocked copies of whatever they want. You’ll still be able to find a copy of the latest Eminem song to download because some hacker somewhere will crack the encryption. My response was that Palladium will eliminate the gray area so that those who download a song will have to become pirates. Peter, of course, thinks that we’re already pirates, so that got nowhere.
Ultimately, I think you have to ask which world will be better: one with enforceable usage rights that drive out the leeway and reduce fair use to hard-coded rules, or one in which there’s reasonable (and even unreasonable) leeway, where some genuine piracy happens, a lot of genuine cash-for-use happens, and a whole bunch in between goes on at every level of society.
My preference is obvious. Unfortunately, that doesn’t mean that it’s right and it sure doesn’t mean that it’s persuasive.
Phil Windley, CIO of Utah, is talking about the complex ways in which governments deal with identities, from birth certificates to death certificates.
He points out that vehicle titles are in a one-to-one relationship with the vehicles: each vehicle has one and only one title. Why? Because people want government to track this. But government has abdicated its responsibility as an issuer of digital signatures, which is why they’re not as useful as they should be.
But (Phil says) people want the services that a government-based identity program could bring. For example, suppose you move to Utah. There’s a long list of things you have to do, from registering your car to registering to vote to getting tax information. Utah wants to give you a single site where you say you’re moving to the state, pay one tax, and everything gets done. But this is hard to do because the various information apps are not connected. (In Utah, your name can be in over 200 different databases.)
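(The shape of the problem is easy to sketch: one “I’m moving to Utah” event that has to fan out to agencies whose systems aren’t connected. The agency handlers below are hypothetical stand-ins, not Utah’s actual services.)

```python
# A toy sketch of the "one front door, many unconnected back ends" problem
# Phil described. The agency handlers are hypothetical stand-ins for the 200+
# databases a resident's name can end up in.

def register_vehicle(resident):
    print(f"DMV: titled and registered a car for {resident}")

def register_voter(resident):
    print(f"Elections: added {resident} to the voter rolls")

def set_up_taxes(resident):
    print(f"Tax Commission: opened an account for {resident}")

# The easy part is the fan-out; the hard part is that each handler keys the
# same person into a different, unconnected identity record.
MOVE_IN_STEPS = [register_vehicle, register_voter, set_up_taxes]

def move_to_utah(resident):
    for step in MOVE_IN_STEPS:
        step(resident)

move_to_utah("new-resident-42")
```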
Phil just netted it out:
Governments are in the identity business but don’t recognize it.
Governments have abdicated their responsibility.
Digital certificates are not the answer.
Citizens are fearful of government collection of data but still demand the connected services that require that aggregation.
The real problem is that these are public policy questions and technology can’t solve them, Phil says.
FWIW, Phil comes across as smart, honest, open, passionate and likeable…the model of what you want a government person to be.