AKMA’s ID
Akma, who is one of the funniest serious writers around, jumps into the Norlin Fray. He intuits, correctly in my view, that what’s motivating Eric more than anything is his interest in digital IDs.
Akma ably worries about one side of digital IDs: our persistent reputation on the Internet. What happens, he wonders, when we systematize that? What do we gain and what do we lose? The other side of digital ID, however, is the one that authenticates me in my online transaction. There’s little existential about such an ID. It’s really just a way of assuring that the money that’s about to transfer in fact comes from my real world wallet. Akma sees (or assumes?) a connection between these two:
If DigID is designed for users first, and only subsequently for commercial interests, then users won’t mind (much) sharing DigID with commerce. If DigID is designed for commerce first and thrust upon users, users will resist and evade.
I assume that these two IDs can be kept apart. But I wonder if I’m right.
Blogthread: These are the additional links Akma captures in the current Norlin blogthread: Doc Searls, Mitch Ratcliffe, Kevin Marks and me.
If it weren’t for the possessive, I could have had an all-caps title for this blog entry. Damn!
Maybe I’m missing something — probably, I’m missing something. But.
Since all of these DigID schemes presume that there is some common schema or standard for describing personal data, do we need an ID “service” at all? Couldn’t this all just live in the browser? I enter all my personal data, once, in the browser. I go to your site. My browser notices that there’s a form whose elements conform to the identity standard and populates the form.
A password/login locker in the browser takes care of password proliferation, and encrypts it with a key that you specify.
I retain 100% control over my data. My computer NEVER submits any data to a vendor without my permission — and I can review, in a browser window, all the data my computer proposes to send off to some vendor. I need retain only one login — the login for my browser — and I can export that data to some standards-defined format that I can readily import into some other device if I need to.
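The browser-resident model above can be sketched in a few lines. This is a toy illustration, not a real standard: the `id:` field names and the profile data are hypothetical, standing in for whatever common schema the DigID proposals assume.

```python
# A minimal sketch of the browser-side idea above: personal data lives
# locally, and only form fields that conform to the shared identity
# schema get filled. Nothing is sent until the user reviews the result.

IDENTITY_SCHEMA = {"id:name", "id:email", "id:postal_code"}  # hypothetical schema

local_profile = {
    "id:name": "Pat Example",
    "id:email": "pat@example.org",
    "id:postal_code": "02139",
}

def autofill(form_fields):
    """Return values only for fields that conform to the schema."""
    return {
        field: local_profile[field]
        for field in form_fields
        if field in IDENTITY_SCHEMA and field in local_profile
    }

# The vendor's form asks for two schema fields and one ad-hoc field;
# the ad-hoc field is simply ignored.
proposed = autofill(["id:name", "id:email", "vendor:tracking_code"])
```

The point of the sketch is the control flow: the profile never leaves the machine except through an explicit, reviewable step, which is exactly the property the "no ID service needed" argument turns on.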
ISTM that the only downside to this model is that there’s no business opportunity in it.
Re Cory Doctorow’s comment:
The catch is that business interests want a way to authenticate that the data that you enter into your browser does in fact correspond to the flesh-and-blood Cory Doctorow sitting in front of the computer, and that you typed it in. They don’t want to trust the person who may (or may not!) be Cory Doctorow typing the data into the computer. They want to have a trusted source who authenticates you, saying, “Yes, we think that that is in fact Cory Doctorow, who actually exists, and you can find him here if something goes wrong with the transaction.” That has all sorts of ramifications. Whoever holds that authority and trust wields great power. Whatever system of technology they use to authenticate you must itself be trustworthy. Different uses for an infrastructure of that sort would have their own social implications.
Commercial interests want a schema not only for describing personal data, but for certifying that it is true and has been entered by the person to whom it belongs. The paths to achieve that sort of goal all appear to have very complicated side effects. From my less-informed position, it looks like people are trying to figure out just what the potential effects of having such a systematic infrastructure would be, what other policy goals are important, and what ways might be available to strike a good balance between all of the different interests and goals. That’s quite a challenge, but it’s one to be taken seriously, and I’m glad that there are smart people working on it.
That *is* a different problem, you’re right. Verifying that the data submitted is correct and true is a LOT harder.
Is there any reason to believe that it’s even tractable? After all, outside of the traditional anti-fraud tactics (no CC shipments except to your CC billing address), any auth/id token given to you by a trust broker is just as subject to fraud as is data originating in your browser.
Ultimately, this is the same problem a Swiss bank has with a secret account: the number of your account is nominally secret. It is also sufficient to get you into the account and to withdraw or deposit to it. If you disclose the secret to someone else, that person has the same access you do. You might have some secondary ID token that you can use to revoke a compromised secret, but even then, the new secret you receive is just as vulnerable in the event that it is compromised.
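The Swiss-bank analogy above can be made concrete with a toy model (the class and method names here are illustrative, not any real banking API): possession of the secret *is* the identity check, and rotating a compromised secret only issues a new, equally stealable one.

```python
# Toy model of the "numbered account" problem: knowledge of the secret
# grants full access, and revocation just replaces one bearer secret
# with another that is vulnerable in exactly the same way.
import secrets

class NumberedAccount:
    def __init__(self):
        self.secret = secrets.token_hex(16)
        self.balance = 0

    def withdraw(self, presented_secret, amount):
        # Presenting the secret is the entire authentication step.
        if presented_secret != self.secret:
            raise PermissionError("wrong secret")
        self.balance -= amount

    def rotate(self, presented_secret):
        # Revoking a compromised secret moves the problem rather than
        # solving it: the replacement can be disclosed just as easily.
        if presented_secret != self.secret:
            raise PermissionError("wrong secret")
        self.secret = secrets.token_hex(16)
        return self.secret
```

Anyone who learns the secret, including a trust broker's token, has exactly the access the legitimate holder does, which is the tractability worry in a nutshell.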
Identity is socially constructed — why believe that it has technical solutions?
Explicit social construction of ID seems to work quite well – eBay, Slashdot and Advogato being obvious examples.
The PGP web of trust never quite took off because of its hard inheritance – eBay and Slashdot do a good job of extracting ID, or at least trustworthiness, from noise.
The intersection of social networks and technology is an interesting space indeed.
Kevin, thanks for mentioning Advogato. The trust metric there solves a very different problem than “digital identity” (for those interested, it evaluates membership in a community with a fairly good degree of protection against a large number of false positives), but I think trust metrics are likely to be a useful component of such a system. In fact, trying to build a better PKI was the original motivation for my work. Ultimately, I concluded that it is a very hard problem.
The PGP web of trust is an intriguing concept, but itself flawed to the point of uselessness. It doesn’t compute anything useful automatically, and the information it exports across its UI is really only useful for hardcore cypherpunk nerds. But I don’t think that PGP’s weaknesses have anything bad to say about (signed) peer certificates.
There’s one other problem that I rarely see mentioned, but nonetheless poses a tough obstacle to the actual realization of any of these “digital identity” proposals: PCs running general purpose operating systems are nowhere near secure enough. If some DigID system were to catch on to the point of being useful for financial transactions, then it becomes a rather tempting target for a virus writer. The solution, I believe, is specialized crypto hardware, optimized for trustworthiness, but this is pretty exotic and expensive by today’s standards.
It’s a fascinating space, and I’m happy to see people putting serious thought into it.
Jeez, bit by bit this thread is inventing Palladium!
The “specialized crypto hardware” I have in mind is very different from the Palladium vision. Rather than a chip embedded deep inside a PC, I’m thinking of something portable, with a simple little UI for authenticating things. For important stuff, you’d use the device directly, but for less important things, it’ll cert temporary IDs hosted on the PC.
I think I need to write up some scenarios for this in more detail. The PC-centric Palladium model has, by virtue of its backing, a lot of mindshare.
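One way to picture the delegation step in that design: a long-lived key that never leaves the portable device certifies a short-lived key for the PC. A real design would use public-key signatures; the HMAC below is a stand-in so this sketch stays in the standard library, and all names here are hypothetical.

```python
# Rough sketch: the device key certs a temporary ID with an expiry,
# so the PC-hosted key is only good for low-stakes use and for a
# limited time. HMAC stands in for a real signature scheme.
import hashlib
import hmac
import secrets
import time

DEVICE_KEY = secrets.token_bytes(32)  # lives only in the portable hardware

def issue_temporary_id(lifetime_seconds=3600):
    """Create a short-lived key plus a cert binding it to an expiry."""
    temp_key = secrets.token_hex(16)
    expiry = int(time.time()) + lifetime_seconds
    payload = f"{temp_key}:{expiry}".encode()
    cert = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return temp_key, expiry, cert

def verify(temp_key, expiry, cert):
    """Accept the temporary ID only if the cert matches and is unexpired."""
    payload = f"{temp_key}:{expiry}".encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert, expected) and time.time() < expiry
```

The design choice worth noting is that compromising the PC only exposes a revocable, time-limited credential; the authority stays in the device with the simple, auditable UI.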
Palladium expects 3rd parties to create ID dongles but does not itself provide ID-authenticating hardware.
David is right: if this thread continues then, inch by inch, it will re-invent Palladium. The “bluesky” promise of Digital Identity far transcends mere “auto-form filling”; it extends to personal control over the extent to which personal information can be obtained, used, and copied. I agree with Cory that this problem may be intractable, and while I and my company (Ping Identity Corp) are laying out stepping stones towards that goal (e.g. by supporting the Liberty Alliance protocol today), I am by no means convinced we’ll ever get to the envisioned end goal.
I’ve laid out (briefly) my thoughts that “strong” Digital Identity is, in fact, DRM, here:
http://netmeme.org/blog/archives/000023.html
Comments appreciated.
Bryan