May 30, 2012
Interop: The Book
John Palfrey and Urs Gasser are giving a book talk at Harvard about their new book, Interop. (It’s really good. Broad, thoughtful, engaging. Not at all focused on geeky tech issues.) NOTE: Posted without re-reading
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
[The next day: Nathan Matias has posted a far better live-blog post of this event.]
JP says the topic of interop seems on its face like it should be “very geeky and very dull.” He says the book started out fairly confined, about the effect of interop on innovation, but as they worked on it, it got broader. E.g., the Facebook IPO has been spun as a story about the stock price’s ups and downs. But from an interop perspective, the story is about why FB was worth $100B or more when its revenues don’t indicate any such thing: FB’s interoperation with our lives makes it hard to extract. But from this also come problems, which is why the subtitle of Interop talks about its peril.
Likewise, the Flame virus shows that viral outbreaks cannot be isolated easily. We need firebreaks to prevent malware from spreading.
In the book, JP and Urs look at how railroad systems became interoperable. Currency is like that, too: currencies vary, but we are able to trade across borders. This has been great for the global economy, but it also creates problems. E.g., the Greek economic meltdown shows the interdependencies of economies.
The book gives a concise definition of interop: “The ability to transfer and render useful data and other information across systems (including organizations), applications or components.” But that is insufficient. The book sees interop more broadly as “the art and science of working together.” The book talks about interop in terms of four layers: data, tech, humans, and institutions.
They view the book as an inquiry, some of which is expressed in a series of case studies and papers.
Urs takes the floor. He’s going to talk about a few case studies.
First, how can we make our cities smarter using tech? (Urs shows an IBM video that illustrates how dependent we are on sharing information.) He draws some observations:
- Solutions to big societal problems increasingly depend on interoperability, from health care to climate change.
- Interop is not black or white; there are many degrees. E.g., power plugs are not interoperable around the world, but there are converters. Or, international air travel requires a lot of interop among the airlines.
- Interop is a design challenge. In fact, once you’ve gotten interop wrong, it’s hard to make it right. E.g., it took a long time to fix air traffic control systems because there was a strongly embedded legacy system.
- There are important benefits, including systems efficiency, user choice, and economic growth.
Urs points to their four-layer model. To make a smart city, the tech the firefighters and police use needs to interoperate, as does their data. But at the human layer, the language used can vary among branches; e.g., “333” might code one thing for EMTs and another for the police. At the institutional layer, the laws for privacy might not be interoperable, making it hard for businesses to work globally.
Second example: When Facebook opened its APIs so that other apps could communicate with FB, there was a spike in innovation: 4k apps that plug into FB were built by non-FB devs. FB’s decision to become more interoperable led to innovation. Likewise for Twitter. “Much of the story behind Twitter is an interop question.”
Likewise for Ushahidi: after the Haitian earthquake, it provided a powerful platform that enabled people to share, accumulate, and map info across apps and devices. This involved all layers of the interop stack, from data to institutions such as the UN pitching in. (Urs also points to safe2pee.org :)
Observations:
- There’s a cycle of interop, competition, and innovation.
- There are theories of innovation, including generativity (Zittrain), user-driven innovation (von Hippel), and small-step innovations (Christensen).
- Caveat: More interop isn’t always good. A highly interoperable business can take over the market, creating a de facto monopoly and suppressing innovation.
- Interop also can help diffuse adoption. E.g., the transition to high-def TV: it only took off once TVs were able to interoperate between analog and digital signals.
Example 3: Credit cards are highly interoperable: wherever you happen to be buying, you can use any of a selection of cards that work with just about any bank. Very convenient.
Observations:
- This level of interop comes with costs and risks: identity theft, security problems, etc.
- The benefits outweigh the risks.
- This is a design problem.
- More interop creates more problems because it means there are more connection points.
Example 4: Cell phone chargers. Traditionally, each phone had its own charger. Why? Europe addressed this with the “Sword of Damocles” approach: if the phone makers didn’t get their act together, the EC would regulate them into it. The micro-USB charger is now standard in Europe.
Observations:
- It can take a long time, because of the many actors, legacy problems, and complexity.
- It’s useful to think about these issues in terms of a 2×2 of regulation vs. non-regulation and collaborative vs. unilateral approaches.
JP back up. He is going to talk about libraries and the preservation of knowledge as interop problems. Think about this as an issue of maintaining interop over time. E.g., try loading up one of your floppy disks; the printed version is much more useful over the long term. Libraries find themselves in a perverse situation: with digital copies of books, they can provide much less than they can with physical books. Five of the six major publishers won’t let libraries lend e-versions. It’d make sense to have new books provided in an open standard format. So, even if libraries could lend the books, people might not have the interoperable tech required to read them. Yet libraries are spending more on e-books, and less on physical books. If libraries have digital copies and not physical copies, they are vulnerable to tech changes. How do we ensure that we can continuously update? The book makes a fairly detailed suggestion. But as it stands, as we switch from one format to another over time, we’re in worse shape than if we had physical books. We need to address this. “When it comes to climate change, or electronic health records, or preservation of knowledge, interop matters, both as a theory and as a practice.” We need to do this by design up front, deciding what the optimal interop is in each case.
Q&A
Q: [doc searls] Are there any places where you think we should just give up?
A: [jp] I’m a cockeyed optimist. We thought electronic health records in the US was the hardest case we came across.
Q: How does the govt conduct consultations with experts from across the US? What would it take to create a network of experts?
A: [urs] Lots of expert networks have emerged, enabled by tech that fosters human interoperability from the bottom up.
A: [jp] It’s not clear to me that we want that level of consultation. I don’t know that we could manage direct democracy enabled in that way.
Q: What are the limits you’d like to see emerge on interop? I.e., I’m thinking of problems of hyper-coherence in biology: a single species of rice or corn that may be more efficient can turn out, with one blight, to have been a big mistake. How do you build in systems of self-limit?
[urs] We try to address this somewhat in a chapter on diversity, which begins with biodiversity. When we talk about interop, we do not suggest merging or unifying systems. To the contrary, interop is a way to preserve diversity, and to prevent fragmentation within diversity. It’s extremely difficult to find the optimum, which varies from case to case, and to decide which speed bumps to put in place.
[jp] You’ve gone to the core of what we’re thinking about.
Q: Human autonomy, efficiency, and economic growth are three of the benefits you mention, but they can be in conflict with one another. How important are decentralized systems?
[urs] We’re not arguing in favor of a single system, e.g., that we have only one type of cell phone. That’s exactly not what we’re arguing for. You want to work toward the sweet spot of interop.
[jp] They are in tension, but there are some highly complex systems where they coexist. E.g., the Web.
Q: Yes, having a single cell phone charger is convenient. But there may be a performance tradeoff, where you can’t choose the optimum voltage if you standardize on 5V. And an innovation deficit: you won’t get magnetic plugs, etc.
[urs] Yes. This is one of the potential downsides of interop. It may lock you in. When you get interop by choosing a standard, you freeze the standard for the future. So an additional challenge is: how can we incorporate mechanisms of learning into standards-setting?