
October 26, 2015

[liveblog][act-tiac] The nation's front door

Sarah Crane, Dir., Federal Citizen Information Center, GSA., is going to talk about USA.gov. “In a world where everyone can search and has apps, is a web portal relevant?,” she asks.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

When the US web portal (first.gov [archival copy]) was launched in 2000, it had an important role in aggregating and centralizing content. Now people arrive through search.

USA.gov is a platform that offers a full suite of bilingual products: information, contacts, social media, etc. All of it is built around a single structured API. The presentation layer is independent of the content; thanks to the API, all the different outputs use the same consistent content.
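A minimal sketch of that separation, with one structured content record and two independent presentation layers. (The record's fields are invented for illustration; this is not USA.gov's actual schema or API.)

```python
# One structured content record, as a hypothetical content API might return it.
record = {
    "title": "Replace a lost Social Security card",
    "summary": "How to request a replacement card online or by mail.",
    "agency": "Social Security Administration",
    "contact": {"phone": "1-800-772-1213"},
}

def render_html(rec):
    """Presentation layer 1: a web page fragment."""
    return (f"<h1>{rec['title']}</h1>"
            f"<p>{rec['summary']}</p>"
            f"<p>Contact: {rec['contact']['phone']}</p>")

def render_sms(rec):
    """Presentation layer 2: a short text message."""
    return f"{rec['title']}: call {rec['contact']['phone']}"

# Both outputs are generated from the same consistent content,
# so updating the record updates every channel at once.
html = render_html(record)
sms = render_sms(record)
print(sms)
```

Because the presentation code never owns the content, a new output channel (a voice assistant, a kiosk) is just one more render function against the same API.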

It’s designed to integrate with other agency content. In fact, they don’t want to be writing any of the content; it should come from other agencies. Also, it’s built so its modules and content can be reused. And it’s built to scale. It can support expansion or consolidation. E.g., if an initiative loses steam, its content can be pulled in and kept available.

How people use govt services: They look online, they ask their friends, and they expect it to be easy. People are surprised when it’s complex. Some people prefer in-person help.

So, how does the portal remain relevant?

Customer experience is a core tenet. They recently launched a Customer Experience division. Constant measurement of performance. Fixing what doesn’t work. Clear lines of reporting up to the senior management. The lines of reporting also reach all the way to the devs.

Last year they re-did their personas, based on four different behaviors: 1. Someone who knows exactly what s/he’s looking for. 2. Someone who has a general idea, but isn’t informed enough to search. 3. Someone who wants to complete a transaction. 4. Someone who wants to contact an elected official. They analyzed the experiences and did “journey maps”: how someone gets to what she wants. These journeys often include travels into other agencies, which they also mapped.

What’s next for them now that info is cheap and easy to find? Sarah likes Mint.com’s model:

  • Aggregated, personalized content collected from multiple agencies.

  • Pre-emptive service – alerts, etc.

  • Relevant updates as you work through a task.

See Blog.USA.gov and USA.gov/Explore.

Q&A

Q: [me] Are people building on top of your API?

A: Some aspects, yes. Heavily used: the A-Z agency index – the only complete listing of every agency and their contact info. There’s a submission to build a machine-readable org chart of the govt that will build on top of our platform. [OMG! That would be incredible! And what is happening to me that I’m excited about a machine-readable org chart?]

Also, if you use bit.ly to shorten a gov’t URL, it creates a one.usa.gov link, which you can use to track Twitter activity, etc.

Certain aspects of the API are being used heavily, primarily the ones that show a larger perspective.

Q: Won’t people find personal notifications from the govt creepy, even though they like it when it’s Mint or Amazon?

A: The band-aid solution is to make it opt-in. Also, being transparent about the data, where it’s stored, etc. This can never be mandatory. The UK’s e-verify effort aims at making the top 20 services digital through a single ID. We’d have to study that carefully. We’d have to engage with the privacy groups (e.g., EPIC) early on.

Q: Suppose it was a hybrid of automated and manual? E.g., I tell the site I’m turning 62 and then it gives me the relevant info, as opposed to it noting from its data that I’m turning 62.

Q: We’re losing some of the personal contact. And who are you leaving behind?

A: Yes, some people want to talk in person. Our agency actually started in 1972 supplying human-staffed kiosks where people could ask questions. Zappos is a model: You can shop fully online, but people call their customer service because it’s so much fun. We’re thinking about prompting people if they want to chat with a live person.

The earliest adopters are likely to be the millennials, and they’re not the ones who need the services generally. But they talk with their parents.

I briefly interviewed Sarah afterwards. Among other things, I learned:

  • The platform was launched in July.

  • The platform software is open source.

  • They are finding awesome benefits to the API approach as an internal architecture: consistent and efficiently-created content deployed across multiple sites and devices; freedom to innovate at both the front and back end; a far more resilient system that will allow them to swap in a new CMS with barely a hiccup.

  • I mentioned NPR’s experience with moving to an API architecture, and she jumped in with COPE (create once, publish everywhere) and has been talking with Dan Jacobson, among others. (I wrote about that here.)

  • She’s certainly aware of the “government as platform” approach, but says that that phrase and model are more directly influential over at 18F.

  • Sarah is awesome. The people in government service these days!


Categories: egov, future Tagged with: api • platform Date: October 26th, 2015 dw


[liveblog][act-iac] Innovation in govt

Brian Nordmann (Senior Advisor, Arms Control, Verification and Compliance at U.S. Department of State) begins with the standard disclaimer that he’s not speaking for the Dept. of State. And here’s mine:

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Brian reminds us that “innovation” has become a tired term. There was a govt flurry to innovate, but no one said exactly what “innovation” means. The State Dept. built a structure so there could be quarterly reports on innovation. “If you want to guarantee that you don’t do anything innovative, create a structure for writing reports.” He says this was a shame because in the basement there was already the perfect place for innovation: the Foggy Bottom Cafe — Starbucks, ice cream parlor, etc. Sect’y Colin Powell went down there every day because he knew in 45 minutes he’d get twenty-five innovative ideas. People sit there and share ideas. “This is how you get innovation done in the gov’t”: Give people the freedom to talk, and the freedom to fail.

A couple of years ago his group started doing public challenges. In the first one, they got 150 entries. They awarded $5K for ideas they would have had to pay hundreds of thousands of dollars for. But then the lawyers discovered they were doing this and wrote a EULA for the site — 28 pages, with buttons strewn throughout that you have to press. Twenty-seven people applied to that year’s challenge.

Brian’s job is simple: Get rid of all nuclear weapons in the world. His office’s job is to come up with ways to verify agreements. In the 1960s, the sensors were physically large: big, expensive, and fragile, and now replacement parts don’t exist. A radar installation in the Aleutian Islands looking for nuclear missile launches still uses vacuum tubes.

Now their challenge is to explore arms control in the information age. What can we learn from YouTube, Facebook, etc.? But the lawyers say that’s a privacy violation. So, instead they’re investigating the Internet of Things. People don’t mean the same thing by that phrase. Brian means by it: networks with sensors.

Are there things we can do to get the public involved in arms control? It’s a complex issue, but you can simplify it to: What can we do to rid the world of nuclear weapons? Brian holds up a small spectrometer that feeds into a laptop. It’s used to analyze water quality. Another instrument is a rolled-up piece of cardboard that attaches to your phone. And smartphones’ accelerometers can sense earthquakes…and nuclear explosions. They could alert agencies that they need to look at the explosion more closely.
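The accelerometer idea can be sketched with a classic seismology trigger: compare a short-term average of signal amplitude (STA) against a long-term average (LTA), and flag when the ratio jumps. This is a toy illustration with invented sample data, not any agency's actual detection pipeline.

```python
def sta_lta_trigger(samples, short=5, long=20, threshold=4.0):
    """Flag sample indices where the short-term average amplitude
    jumps well above the preceding long-term background level."""
    triggers = []
    for i in range(long, len(samples)):
        # STA: the most recent `short` samples.
        sta = sum(abs(s) for s in samples[i - short:i]) / short
        # LTA: the background window just before the STA window.
        lta = sum(abs(s) for s in samples[i - long:i - short]) / (long - short)
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background noise, then a sudden sustained shake.
signal = [0.01] * 40 + [0.5] * 10
print(sta_lta_trigger(signal))
```

A phone would stream real accelerometer readings into a loop like this and, on a trigger, report a timestamped event so agencies could correlate reports across many phones.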

Researchers in Hawaii bought 12 iPhone 6s to explore this, which tripped an alarm at Apple. Apple contacted the researchers. The researchers told Apple that their phones could be used as seismic detectors, and that the iPhone 6 degraded that capability. The researchers are trying to broaden Apple’s sense that its phones can be used for more than app delivery.

For innovation, you want to talk to new people, not the same people all the time. By bringing in new people, you’ll get a lot of junk, but also some ideas worth exploring. Hobbyists and startups are generally better to talk with than large companies. Brian spoke with Tom Dolby, ex of MTV, who has a media lab at Johns Hopkins that is working with Baltimore youth. Brian works with a California group teaching Latino kids how to program. Imagine putting them together, along with people from around the world, to create a Teen Summit. Imagine they see what they have in common and what they do not.

Q: How are you communicating to device manufacturers that they are platforms for innovation?

A: People respond to a title that ends with “US Dept. of State.”


Categories: egov, future Tagged with: innovation • platforms Date: October 26th, 2015 dw


August 18, 2015

Newton’s non-clockwork universe

The New Atlantis has just published five essays exploring “The Unknown Newton”. It is — bless its heart! — open access. Here’s the table of contents:

Rob Iliffe provides an overview of Newton’s religious thought, including his radically unorthodox theology.

William R. Newman examines the scientific ambitions in Newton’s alchemical labors, which are often written off as deviations from science.

Stephen D. Snobelen — who in the course of writing his essay discovered Newton’s personal, dog-eared copy of a book that had been lost — provides an in-depth look at the connection between Newton’s interpretation of biblical prophecy and his cosmological views.

Andrew Janiak explains how Newton reconciled the apparent tensions between the Bible and the new view of the world described by physics.

Finally, Sarah Dry describes the curious fate of Newton’s unpublished papers, showing what they mean for our understanding of the man and why they remained hidden for so long.


Stephen Snobelen’s article, “Cosmos and Apocalypse,” begins with a paper in the John Locke collection at the Bodleian: Newton’s hand-drawn timeline of the events in the book of Revelation. Snobelen argues that we’ve read too much of The Enlightenment back into Newton.


In particular, the concept of the universe as a pure clockwork that forever operates according to mechanical laws comes from Laplace, not Newton, says Snobelen. He refers to David Kubrin’s 1967 paper “Newton and the Cyclical Cosmos”; it is not open access. (Sign up for free with JSTOR and you get constrained access to its many riches.) Kubrin’s paper is a great piece of work. He makes the case — convincingly to an amateur like me — that Newton and many of his cohorts feared that a perfectly clockwork universe that did not need Divine intervention to operate would be seen as also not needing God to start up. Newton instead thought that without God’s intervention, the universe would wind down. He hypothesized that comets — newly discovered — were God’s way of refreshing the Universe.


The second half of the Kubrin article is about the extent to which Newton’s late cosmogony was shaped by his Biblical commitments. Most of Snobelen’s article is about a discovery in 2004 of a new document that confirms this, and adds to it that God’s intervention heads the universe in a particular direction:

In sum, Newton’s universe winds down, but God also renews it and ensures that it is going somewhere. The analogy of the clockwork universe so often applied to Newton in popular science publications, some of them even written by scientists and scholars, turns out to be wholly unfitting for his biblically informed cosmology.

Snobelen attributes this to Newton’s recognition that the universe consists of forces all acting on one another at the same time:

Newton realized that universal gravity signaled the end of Kepler’s stable orbits along perfect ellipses. These regular geometric forms might work in theory and in a two-body system, but not in the real cosmos where many more bodies are involved.

To maintain the order represented by perfect ellipses required nudges and corrections that only a Deity could accomplish.
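The many-body point is easy to put rough numbers on. A back-of-the-envelope comparison (rounded textbook values) of the Sun's gravitational pull on the Earth versus Jupiter's pull at its closest approach shows the perturbation is tiny but real:

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
M_JUP = 1.898e27        # kg
R_EARTH_SUN = 1.496e11  # m (1 AU)
R_JUP_SUN = 7.785e11    # m

def accel(mass, dist):
    """Gravitational acceleration toward a point mass at a distance."""
    return G * mass / dist**2

a_sun = accel(M_SUN, R_EARTH_SUN)
# Jupiter at its closest approach to Earth:
a_jup = accel(M_JUP, R_JUP_SUN - R_EARTH_SUN)

# Roughly a few parts in a hundred thousand: small, but relentless,
# and over many orbits it nudges the Earth off a perfect Keplerian ellipse.
print(f"Sun: {a_sun:.2e} m/s^2, Jupiter: {a_jup:.2e} m/s^2, "
      f"ratio: {a_jup / a_sun:.1e}")
```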


Snobelen points out that the idea of the universe as a clockwork was more Leibniz’s idea than Newton’s. Newton rejected it. Leibniz got God into the universe through a far odder idea than as the Pitcher of Comets: souls (“monads”) experience inhabiting a shared space in which causality obtains only because God coordinates a string of experiences in perfect sync across all the monads.


“Newton’s so-called clockwork universe is hardly timeless, regular, and machine-like,” writes Snobelen. “[I]nstead, it acts more like an organism that is subject to ongoing growth, decay, and renewal.” I’m not sold on the “organism” metaphor based on Snobelen’s evidence, but that tiny point aside, this is a fascinating article.


Categories: future, science Tagged with: future • newton • prediction Date: August 18th, 2015 dw


April 14, 2015

[shorenstein] Managing digital disruption in the newsroom

David Skok [twitter:dskok] is giving a Shorenstein Center lunchtime talk on managing digital disruption in the newsroom. He was the digital advisor to the editor of the Boston Globe. Today he was announced as the new managing editor of digital at the Globe. [Congrats!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

As a Nieman fellow, David audited a class at the Harvard Business School taught by Clay Christensen, of “disruptive innovation” fame. This gave him the sense that whether or not newspapers will survive, journalism will. Companies can be disrupted, but for journalism it means that for every legacy publisher that’s disrupted, new entrants come in at the low end and move up market. E.g., Toyota started off at the low end and ended up making Lexuses. David wrote an article with Christensen [this one?] that said that you may start with aggregation and cute kittens, but as you move up market you need higher quality journalism that brings in higher-value advertising. “So I came out of the project doubly motivated as a journalist,” but also wanting to hold off the narrative that there is an inevitability to the demise of newspapers.


He helped start GlobalNews.ca and got recruited by the Globe. There he held to the RPP model: the Resources, Processes, and Priorities you put in place to help frame an organizational culture. It’s important for legacy publishers to see that it isn’t just tech that’s bringing down newspapers; the culture and foundational structure of those organizations are also to blame.

Priorities:
If you take away the Internet, a traditional news organization is a print factory line. The Internet tasks were typically taken up by the equivalent groups within the org. Ultimately, the publisher’s job is to generate profit, so s/he picks the paths that lead most directly to short-term returns. But that means user experience gets shuffled down, as does the ability of the creators to do “frictionless journalism.” On the Internet, I can write the best lead, but if you can’t read it on your phone in 0.1 seconds, it doesn’t exist. The human experience has to be the most important thing. The consumer is the most important person in this whole transaction. How are we making sure that person is pleased?


In the past 18 months David has done a restructuring of the Globe online. He’s been the general mgr of Boston.com. Every Monday he meets with all the group leads, including the sales team (which he does not manage for ethical journalism reasons). This lets them set priorities not at the publisher level where they are driven by profit, but by user and producer experience. The conceit is that if they produce good user and producer experiences, the journalism will be better, and that will ultimately drive more revenue in advertising and subscriptions.


The Globe had a free site (Boston.com) and a paywall site (BostonGlobe.com). This was set up before his time. Relative to its size as a website business, Boston.com has a remarkable amount of advertising revenue. BostonGlobe.com is a really healthy digital subscription business: it has more digital subscriptions than any other North American newspaper except the NYT and WSJ. These are separate businesses that had been smushed together, so David split them up.

Processes:

They’ve done a lot to change their newsroom processes. Engineers are now in the newsroom. They use agile processes. The newsroom is moving toward an 18-24 hour cycle as opposed to the print cycle.


We do three types of journalism on our sites:


1. Digital first — the “bloggy stuff.” How do we add something new to those conversations that provides the Globe’s unique perspective? We don’t want to be writing about things simply because everyone else is. We want to bring something new to it. We have three digital first writers.


2. The news of the day. We do a good job with this, as demonstrated during the Marathon bombing.


3. Enterprise stuff — long investigations, etc. Those stories get incredible engagement. “It’s heartening.” They’re experimenting with release schedules: how do you maximize the exposure of a piece?

Resources:
In terms of resources: We’re looking at our content management system (CMS). Ezra Klein went to Vox in part because of their CMS. You need a CMS that gives reporters what they need and want. We also need better realtime analytics.


Priorities, Processes + Resources = organizational culture.

Q&A
Q: You’re optimistic…?


A: We’re now entering the third generation of journalism on line. First: [missed it]. Second: SEO. Third: the social phase, the network effect. How are we engaging our readers so that they feel responsible to help us succeed? We’re not in the business of selling impressions [=page views, etc.] but experiences. E.g., we have a bracket competition (“Munch Madness“) for restaurant reviews. We tell advertisers that you’re getting not just views but experiences.


Q: [alex jones] And these revenues are enough to enable the Globe to continue…?


A: It would be foolish of me to say yes, but …


Q: [alex jones] How does the Globe attract an audience that’s excited but civil?


A: Part of it is thinking about new ways of doing journalism. E.g., for the Tsarnaev trial, we created cards that appear on every page that give you a synopsis of the day’s news and all the witnesses and evidence online. We made those cards available to any publisher who wanted them. They’re embeddable. We reached out to every publisher in New England that can’t cover it in the depth that the Globe can and offered it to them for free. “We didn’t get as much uptake as we’d like,” perhaps because the competitive juices are still flowing.


Then there are the comments. When news orgs first put comments on their site, they thought about them as digital letters to the editor. Comments serve another purpose: they are a product and platform in and of themselves where your community can talk about your product. They’re not really tied to the article. Some comments “make me weep because they’re so beautiful.”


Q: As journalists are being asked to do much more, what do you think about the pay scale declining?


A: I can’t speak for the industry. The Globe pays competitively. We’re creating jobs now. And there are so many more outlets out there that didn’t exist five years ago. Journalists today aren’t just writers. They’re software engineers, designers, etc.


I’m increasingly concerned about the lack of women engineers entering the field. Newspapers have as much responsibility as any other industry to address this issue.


Q: How to monetize aggregators?


A: If we were to try to go to every org that aggregates us, it’d be a fulltime job. We released a story online on a Feb. afternoon about Jeb Bush at Andover. [This one?] By Friday night, it was all over. I don’t view it as a threat. We have a meter. My job is to make sure that our reporting is good enough that you’ll use your credit card and sign up. I’m in awe in the number of people who sign up every day. We have churn issues as does everyone, but the meter business has been a success.


Q: [me] As you redo your CMS, have you thought about putting in an API? If so, would you consider opening it to the public?


A: When I’ve opened up API sets, there has been minimal takeup.


Q: What other newspapers are doing a good job addressing digital issues? And does the ownership structure matter?


A: The Washington Post, and they have a very similar ownership structure as the Globe.


Q: [alex] What’s Bezos’s effect on the WaPo?


A: Having the Post appear on every Kindle is something we’d all like for ourselves.


Q: Release schedule?


A: Our newsroom’s phenomenal editors are recognizing and believing that we are not a platform-specific business. We find only one in four of our print subscribers logs on to the web site with any frequency. We have two different audiences. We’ve had no evidence that releasing stories earlier on digital cannibalizes our print business. I love print. But when I get the Sunday edition, I feel guilty if I recycle it before I’ve read it all. So why not give people the opportunity to read it when they want? If it’s ready on a Wednesday, let them read it on Wednesday. Different platforms have different reader habits.

Q: What’s native to the print version?

A: Some of the enterprise reporting perhaps. But it’s more obvious in format issues. E.g., the print showed the 30 charges Tsarnaev was charged with. It had an emotional impact that digital did not.


Q: Is your print audience entirely over the age of 50?


A: No. It’s a little older than our overall numbers, but not that much.


Q: What are you doing to reduce the churn rate? What’s worked on getting print and digital folks to understand each other?


A: I’m a firm believer in data. We’re not pushing for digital change because we want to but because data backs up our claims. About frictionlessness: It’s so easy to buy goods. Uber. Even buying a necklace. We’re working with a backend database that is complex. We have to tie that into our digital product. The front end complexities on how users can pay come from the complexity of the back end.


Q: [nick sinai] I appreciate your comments about bringing designers, developers, and UX into the newsroom. That’s what we’re trying to do in the govt. for digital services. How about data journalism?


A: Data journalism lets you tell stories you didn’t know were there. My one issue: we’ve reached a barrier; we’re reliant on what datasets are available.


Q: How many reporters work for print, Boston.com, and BostonGlobe.com?


A: 250 journalists or so work for the Globe and they all work for all platforms.


Q: Are different devices attracting different stories? E.g., a long enterprise story may do better on particular devices. Where is contradiction, nuance, subtlety in this environment? How much is constrained by the device?


A: Yes, there are form-specific things. But there are also social-specific things. If you’re coming from Reddit, your behavior is different from your behavior coming from Facebook, etc. Each provides its own unique expectation of the reader. We’re trying to figure out how to be smarter in detecting where you’re coming from and what assets we should serve up to you. E.g., if you’re coming from Reddit and are going back to talk about the article, maybe you’re never going to subscribe, but could we provide a FB Like button, etc.?
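A toy sketch of the referrer-based logic David describes: pick which engagement asset to serve based on where the visitor came from. The routing rules and asset names here are invented for illustration, not the Globe's actual system.

```python
from urllib.parse import urlparse

# Invented rules: which engagement asset to show per traffic source.
RULES = {
    "reddit.com": "share_buttons",     # likely going back to discuss the piece
    "facebook.com": "follow_page_cta",
    "twitter.com": "follow_page_cta",
}
DEFAULT = "subscribe_meter"            # direct or unknown traffic

def asset_for(referrer_url):
    """Choose an asset from the referrer's hostname."""
    host = urlparse(referrer_url).netloc.lower()
    # Strip a leading "www." so www.reddit.com matches reddit.com.
    host = host.removeprefix("www.")
    return RULES.get(host, DEFAULT)

print(asset_for("https://www.reddit.com/r/boston/comments/abc"))
print(asset_for("https://news.example.com/article"))
```

In production this decision would sit in the page-serving layer, with the rules tuned by the analytics described below rather than hard-coded.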


Q: Analytics?


A: The most important metric for me is journalistic impact. That’s hard to measure. Sheer numbers? The three legislators who can change a law? More broadly: at the top of the funnel, it’s how to grow our audience: page views, shares, unique visitors, etc. As you get deeper into the funnel, it’s about how much you engage with the site: bounce rate, path, page views per visit, time spent, etc. Third metric: return frequency. If you had a really good experience, did you come back: return visits, subscribers, etc.
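The funnel metrics in that answer can be made concrete with a toy visit log. The data and field layout are invented; this just shows how the three layers (audience size, engagement depth, return frequency) fall out of the same log.

```python
from collections import Counter

# Each visit: (visitor_id, pages_viewed). Toy data for illustration.
visits = [
    ("a", 1), ("a", 4),            # visitor "a" came back
    ("b", 1),
    ("c", 3), ("c", 2), ("c", 5),
]

# Top of funnel: audience size.
page_views = sum(p for _, p in visits)
unique_visitors = len({v for v, _ in visits})

# Deeper in the funnel: engagement per visit.
bounce_rate = sum(1 for _, p in visits if p == 1) / len(visits)
pages_per_visit = page_views / len(visits)

# Third layer: return frequency (visitors with more than one visit).
visit_counts = Counter(v for v, _ in visits)
returning = sum(1 for c in visit_counts.values() if c > 1) / unique_visitors

print(page_views, unique_visitors, round(bounce_rate, 2),
      round(pages_per_visit, 2), round(returning, 2))
```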


[Really informative talk.]


Categories: future, journalism Tagged with: api • journalism • liveblog Date: April 14th, 2015 dw


January 7, 2015

Harvard Library adopts LibraryCloud

According to a post by the Harvard Library, LibraryCloud is now officially a part of the Library toolset. It doesn’t even have the word “pilot” next to it. I’m very happy and a little proud about this.

LibraryCloud is two things at once. Internal to Harvard Library, it’s a metadata hub that lets lots of different data inputs be normalized, enriched, and distributed. As those inputs change, you can change LibraryCloud’s workflow process once, and all the apps and services that depend upon those data can continue to work without making any changes. That’s because LibraryCloud makes the data that’s been input available through an API which provides a stable interface to that data. (I am overstating the smoothness here. But that’s the idea.)

To the Harvard community and beyond, LibraryCloud provides open APIs to access tons of metadata gathered by Harvard Library. LibraryCloud already has metadata about 18M items in the Harvard Library collection — one of the great collections — including virtually all the books and other items in the catalog (nearly 13M), a couple of million images in the VIA collection, and archives at the folder level in Harvard OASIS. New data can be added relatively easily, and because LibraryCloud is workflow based, that data can be updated, normalized and enriched automatically. (Note that we’re talking about metadata here, not the content. That’s a different kettle of copyrighted fish.)
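A minimal sketch of the hub idea: records from heterogeneous sources get normalized into one common schema and then enriched, so API consumers never see the source quirks. The field names and the enrichment step here are invented for illustration; they are not LibraryCloud's actual formats.

```python
# Two sources with different field conventions (invented examples).
catalog_rec = {"Title": "Walden", "creator": "Thoreau, Henry David", "yr": "1854"}
via_rec = {"title_display": "Walden Pond, photograph", "date": "1900"}

def normalize(rec):
    """Map source-specific fields onto one common schema."""
    return {
        "title": rec.get("Title") or rec.get("title_display"),
        "creator": rec.get("creator"),         # may be absent in some sources
        "year": rec.get("yr") or rec.get("date"),
    }

def enrich(rec):
    """Add a derived field; here, a crude century bucket."""
    rec["century"] = (f"{(int(rec['year']) - 1) // 100 + 1}th c."
                      if rec["year"] else None)
    return rec

# The hub runs every incoming record through the same workflow,
# so the API serves one consistent shape regardless of origin.
hub = [enrich(normalize(r)) for r in (catalog_rec, via_rec)]
for item in hub:
    print(item)
```

When a source changes its export format, only `normalize` changes; every downstream app keeps working, which is the stability point made above.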

LibraryCloud began as an idea of mine (yes, this is me taking credit for the idea) about 4.5 years ago. With the help of the Harvard Library Innovation Lab, which I co-directed until a few months ago, we invited in local libraries and had a great conversation about what could be done if there were an open API to metadata from multiple libraries. Over time, the Lab built an initial version of LibraryCloud primarily with Harvard data, but with scads of data from non-Harvard sources. (Paul Deschner, take many many bows. Matt Phillips, too.) This version of LibraryCloud — now called lilCloud — is still available and is still awesome.

With the help of the Library Lab, a Harvard internal grant-giving group, we began a new version based on a workflow engine and hosted in the Amazon cloud. (Jeffrey Licht, Michael Vandermillen, Randy Stern, Paul Deschner, Tracey Robinson, Robin Wendler, Scott Wicks, Jim Borron, Mary Lee Kennedy, and many more, take bows as well. And we couldn’t have done it without you, Arcadia Foundation!) (Note that I suffer from Never Gets a List Right Syndrome, so if I left you out, blame my brain and let me know. Don’t be shy. I’m ashamed already.)

The Harvard version of LibraryCloud is a one-library implementation, although that one library comprises 73 libraries. Thus the LibraryCloud Harvard has adopted is a good distance from the initial vision of a single API for accessing multiple libraries. But it’s a big first step. It’s open source code [documentation]. Who knows?

I think it’s impressive that Harvard Library has taken this step toward adopting a platform architecture, and it’s cool beyond cool that this architecture is further opening up Harvard Library’s metadata riches to any developer or site that wants to use it. (This also would not have happened without Harvard Library’s enlightened Open Metadata policy.)


Categories: future, libraries Tagged with: library • librarycloud • platforms Date: January 7th, 2015 dw


December 24, 2014

Fame. Web Fame. Mass Web Fame.

A weird thing happened yesterday. First I got a call from a Swedish journalist writing about a Danish kid who has become famous on the Net for nothing in particular and is now weighing his options as a possible recording star. Since I’ve written about Web fame (in Small Pieces Loosely Joined, in 2002) and talked about it (at the keynote of the first ROFLcon conference in 2008), he gave me a call and we had a fun conversation.

That conversation prompted me to write a post about how Web fame has changed over the past few years. I was mostly through a first draft when I got a call from a journalist at a well-known US newspaper who is doing a story about Web fame, and wanted to talk with me about it. Huh?

Keep in mind that I hadn’t yet posted about the topic. He got to me totally independently of the Swedish journalist. And it’s not like I spend my mornings talking to the press. It’s just a completely weird coincidence.

Anyway, afterwards I posted what I had written. It’s at Medium. Here’s the beginning:

It’s a great time to be famous, at least if you’re interested in innovating new types of fame. If you’re instead looking for old-fashioned fame, you’re out of luck. We’re in a third epoch of fame, and this one is messier than any of the others. (Sure, that’s an oversimplification, but what isn’t?)

Before the Web there was Mass Fame, the fame bestowed upon lucky (?) individuals by the mass media. The famous were not like you and me. They were glamorous, had an aura, were smiled upon by the gods.

Fame back then was something that was done to the audience. We could accept or reject those thrust upon us by the mass media, but since fame was defined as mass awareness of someone, the mass media were ultimately in control.

With the dawn of the Web there was Internet Fame. We made people famous…[more]

(Amanda Palmer, whom I use as a positive example of the new possibilities, facebooked the post, which makes me one degree from famous!)


Categories: future Tagged with: culture • fame • hollywood Date: December 24th, 2014 dw


December 14, 2014

Jeff Jarvis on journalism as a service

My wife and I had breakfast with Jeff Jarvis on Thursday, so I took the opportunity to do a quick podcast with him about his new book Geeks Bearing Gifts: Imagining New Futures for News.

I like the book a lot. It proposes that we understand journalism as a provider of services rather than of content. Jeff then dissolves journalism into its component parts and asks us to imagine how they could be envisioned as sustainable services designed to help readers (or viewers) accomplish their goals. It’s more a brainstorming session (as Jeff confirms in the podcast) than a “10 steps to save journalism” tract, and some of the possibilities seem more plausible — and more journalistic — than others, but that’s the point.

If I were teaching a course on the future of journalism, or if I were convening my newspaper’s staff to think about the future of our newspaper, I’d have them read Geeks Bearing Gifts if only to blow up some calcified assumptions.


Categories: future, journalism Tagged with: journalism • social media Date: December 14th, 2014 dw


November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a SlashDot discussion page.

  • The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

  • I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

  • I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

  • I drag the file into VLC. It fails, but it does me the favor of telling me why: It’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded.

  • A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

    It’s no longer available because in 1999 Real Audio solved a problem for media companies: install a RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before it pulled the plug on the Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.

    By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.


    Categories: future, net neutrality, open access Tagged with: future • interoperability • open • platforms • protocols • web Date: November 26th, 2014 dw


November 21, 2014

    APIs are magic

    (This is cross-posted at Medium.)

    Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

    “Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

    Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

    First, an API matters typically because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

    Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

    Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program have to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial.)
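To make the third point concrete, here is a minimal sketch (using only the Python standard library) of building that same Wikipedia API request. The endpoint and parameters are the ones from the example URL above; the point is that the request is nothing more than an ordinary URL.

```python
import urllib.parse

# The Wikipedia API endpoint from the example above.
base = "http://en.wikipedia.org/w/api.php"

# The same query parameters the example URL carries.
params = {
    "action": "query",
    "titles": "San_Francisco",
    "prop": "images",
    "imlimit": "20",
    "format": "jsonfm",
}

# A RESTful request is just a standard URL: base address plus query string.
url = base + "?" + urllib.parse.urlencode(params)
print(url)
```

Fetching that URL (with, say, `urllib.request.urlopen`) returns data rather than a Web page, which is exactly what a program wants.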

    Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

    Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.

    In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

    In shorter shortness: APIs embody the very best of the Web.


    Categories: free culture, future Tagged with: apis • generosity • platforms • technology Date: November 21st, 2014 dw


    October 13, 2014

    Library as starting point

    A new report on Ithaka S+R’s annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

    The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

    The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

    The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

    I am not sold on it. Roger Schonfeld cites Lorcan Dempsey, who is always worth listening to:

    Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

    Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
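As an illustration of the Schema.org suggestion, here is a hedged sketch of the JSON-LD metadata a library catalog page might embed so search engines can recognize the item. The book, author, and URL below are invented examples, not real catalog data.

```python
import json

# A hypothetical Schema.org "Book" record for a library catalog page.
# All values here are made up for illustration.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Catalog Record",
    "author": {"@type": "Person", "name": "A. Author"},
    "url": "https://library.example.edu/item/12345",
}

# Serialized, this would sit inside a <script type="application/ld+json">
# tag in the item's HTML page.
json_ld = json.dumps(book, indent=2)
print(json_ld)
```

Search engines that understand Schema.org vocabulary can then index the item's metadata directly, without the library having to beat Google at general-purpose indexing.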

    But assuming that libraries shouldn’t outsource all of their users’ searches, then what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

    While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

    I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then, I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

    If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)

    This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

    And that is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine we could do this with data structures that support wildly messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)

    Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.


    Categories: future, libraries, too big to know Tagged with: 2b2k • libraries • platforms Date: October 13th, 2014 dw




    Creative Commons License
    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
    TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

    Joho the Blog uses WordPress blogging software.
    Thank you, WordPress!