December 31, 2010
Happy new year, libraries!
May 2011 be the best year for libraries in a couple of millennia!
So much is going on that it could be, you know. (And how often do you get to say that?)
December 30, 2010
Pew Internet reports that 65% of American Net users (75% of the people they contacted) have paid for online, digital content. Ever. And there’s no category of goods in which more than one third of the respondents have ever paid for content.
The content could include articles, music, software, or anything else in digital form. Here are the results for the fifteen different types of content Pew asked about:
33% of internet users have paid for digital music online
33% have paid for software
21% have paid for apps for their cell phones or tablet computers
19% have paid for digital games
18% have paid for digital newspaper, magazine, or journal articles or reports
16% have paid for videos, movies, or TV shows
15% have paid for ringtones
12% have paid for digital photos
11% have paid for members-only premium content from a website that has other free material on it
10% have paid for e-books
7% have paid for podcasts
5% have paid for tools or materials to use in video or computer games
5% have paid for “cheats or codes” to help them in video games
5% have paid to access particular websites such as online dating sites or services
2% have paid for adult content
The first three are way lower than I would have expected. That 15% have paid for ringtones I find bewildering and just a little depressing. That 2% report having paid for “adult content” I take as meaning 2% actually responded, “Yeah, I pay for porn. You gotta problem with that?”
Overall, there are a number of different conclusions we could draw:
1. The survey was flawed. (The survey questions are here [pdf]). But Pew is a reputable group, and not in service of some other group with an agenda.
2. There is such a wealth of goodness on the Net that in no single category do a majority of people have to use money to get what they want.
3. This is a sign of disease: So few people are paying for anything that entire categories of goods-provisioning are going to die, taking the abundances with them.
4. This is a sign of health: New business models based on minority participation are emerging, and will continue to emerge, that will keep the categories alive and, indeed, flourishing.
5. Most of what’s available on the Net sucks so much that we won’t pay for it.
6. We are just so over paying for things, dude.
FWIW, I find I’m willing to pay for more content these days, in part out of a sense of responsibility, in part because the payment mechanisms have gotten easier, and always if I can sense the human behind the transaction. (This is a self-report, not a principled stand.)
December 29, 2010
I just heard that Ronnie Simonsen died.
I knew him, a little, because he was one of the campers at Camp Jabberwocky [more posts] and Zero Mountain Farm. The loving obituary in the Boston Globe captures much of what was remarkable about Ronnie, but I knew him inextricably embedded within his summer community. As in some ideal post-racist world, people in this community do not see disabilities. I cannot think of him apart from their loving and fully mutual embrace.
Every year the campers make a movie, an exercise in play, joy, and friendship. Here’s the Return of the Muskrats, starring Ronnie. I — we — will miss him.
December 28, 2010
Alex Wright has an excellent article in the New York Times today about the great work being done by citizen scientists. (Alex follows up in his blog with some more worthy citizen science efforts.)
Alex, who I met a few years ago at a conference because we had written books on similar topics — his excellent Glut and my Everything Is Miscellaneous — quotes me a couple of times in the article. The first time, I say that the people who are gathering data and classifying images “are not doing the work of scientists.” Some in the comments have understandably taken issue with that characterization. It’s something I deal with at some length in Too Big to Know. Because of the curtness of the comment, it could easily be taken as dismissive, which was not my intent; these volunteers are making a real contribution, as Alex’s article documents. But, in many of the projects Alex discusses (and that I discuss in my manuscript), the volunteers are doing work for which they need no scientific training. They are doing the work of science — gathering data certainly counts — but not the work of scientists. But that’s what makes it such an exciting time: You don’t need a degree or even training beyond the instructions on a Web page, and you can be part of a collective effort that advances science. (Commenter kc I think makes a good argument against my position on this.)
FWIW, the origin of my participation in the article was a discussion with Alex about why, in this age of the amateur, it’s so hard to find serious leaps in scientific thinking coming from amateurs. Amateurs drove science more in the 19th century than now. Of course, that’s not an apples-to-apples comparison, because of the professionalization of science in the 20th century. Also, so much of basic science now requires access to equipment far too expensive for amateurs. (Although that’s scarily not the case for gene sequencers.)
I’m working on a talk that asks why our greatest institutions have trembled, if not shattered, before the tiny silver hammer of the hyperlink. One tap and, boom, down come newspapers, the recording industry, traditional encyclopedias… Why?
I recognize there are many ways of explaining any complex event. When it comes to understanding the rise of the Net, I tend to pay insufficient attention to economic explanations and to historic explanations based around large players. I’m doing the opposite of justifying that inattention; I’m copping to it. I tend instead to look first at the Net as a communications medium, and see the changes in light of how what moves onto the Net takes on the properties of the Internet’s sort of network: loose, huge, center-less, without shape, etc.
But, then you have to ask why we flocked to that sort of medium. Why did it seem so inviting? Again, there are multiple explanations, and we need them all. But, perhaps because of some undiagnosable quirk, I tend to understand this in terms of our mental model of who we are and how we live together. My explanatory model hits rock bottom (possibly in both senses) when I see the new network model as more closely fitting what (I believe) we’ve known all along: we are more social than the old model thought, the world is more interesting than the old model thought, we are more fallible and confused than the old model wanted us to believe.
(Now that I think of it, that’s pretty much what my book Small Pieces Loosely Joined was about. So, eight years later, to my surprise, I still basically agree with myself!)
My preference for understanding-based explanations undoubtedly reflects my own personality and unexplored beliefs. I don’t believe there is one bedrock that is bedrockier than all the others.
December 26, 2010
Dan O’Neill on a mailing list writes “The last day that you will be able to get your roll of Kodachrome developed will be four days from now, December 30, 2010.” He also posted these links:
Trip to India documented on the last roll of Kodachrome. [Me: You’ve got to see Steve McCurry’s India photos, and not just the last 36. He did the famous Afghan girl cover for National Geographic, for example.]
Last roll of Kodachrome manufactured (not the last roll processed)
A Google search for ‘famous kodachrome pictures’
December 25, 2010
Eliot Weinberger (no relation) reviews George Bush’s memoir, Decision Points, in the London Review of Books, reading it through Foucault’s eyes.
The first half of the review picks up on Foucauldian themes of authorial identity and authenticity. Quite amusing. The second half is a recitation of the ways in which Bush sucked that the memoir forgets to mention. I’m only recommending the first half.
December 21, 2010
Jim Lucchese, CEO of Echo Nest, is giving a talk on the future of music, which he says is in the hands of app developers.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.
Echo Nest analyzes music tracks (16M so far), looking at many, many parameters. It makes that information available to developers of apps.
MTV uses Echo Nest to figure out who is listening to what, how the audio sounds, and what they’re saying about it on the Web, in order to build a personalized station. More interactive, more web-connected, more personalized, and more engaging, he says. Shifts in how we interact with and experience music are occurring every day. “Music apps are thriving,” he says, referring to iOS (iPhone, iPad). The bad news is that most of the thousands of developers reshaping music are locked out of the business. They have to navigate all of the rights issues, and get access to the players. Echo Nest has a community of 6,000 developers, but many of the apps are sitting on the shelf because they can’t get access.
The aim of Echo Nest is to build a machine learning system that understands music, but does it at Web scale. It analyzes music and finds the pitch, tempo, etc. Pandora does this by hand, and has analyzed about 800,000 tracks; it doesn’t scale to 10M tracks. Echo Nest combines this with cultural understanding, which it gathers by crawling the Web. Out of this comes “a ton of data”: similar artists, how popular, tag clouds, hotness, bios, song structure, “fanalytics” (demographics of who is listening, psychographics, etc.) They make this info available to developers, who have made 120 apps, including visualizers, targeted marketing apps, etc.
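To give a concrete sense of what “making that info available to developers” might look like, here is a minimal Python sketch of fetching an artist’s derived data over HTTP. The endpoint, parameters, and response fields are made up for illustration; this is not the actual Echo Nest API, just the shape of the data described above (similar artists, hotness, tag clouds, and so on):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical endpoint and key, for illustration only (not the real Echo Nest API).
API_BASE = "https://api.example.com/v1/artist/profile"
API_KEY = "YOUR_KEY_HERE"

def artist_profile(name):
    """Fetch derived data about an artist: similar artists, hotness, tag cloud."""
    query = urlencode({"name": name, "api_key": API_KEY})
    with urlopen(f"{API_BASE}?{query}") as resp:
        return json.load(resp)

profile = artist_profile("Black Sabbath")
print(profile.get("hotness"))          # e.g., a popularity score
print(profile.get("similar_artists"))  # e.g., a ranked list of artist names
print(profile.get("terms"))            # e.g., a tag cloud of descriptive terms
```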
Many were built during music hack days (weekend coding fests). E.g., more granular control over a Pandora-like app. Or, provide detailed info about artists and tracks. Or, Six Degrees of Black Sabbath: find connections between any two artists. Or, a social trivia app (name the tune, identify the fake band, etc.). Or, turn any tune into a swing tune using Echo Nest’s audio manipulation tools. Or, Audio Kicker: a location-aware social music discovery app (uses the tastes of a group in the same room).
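The “Six Degrees of Black Sabbath” idea is, underneath, a shortest-path search over the similar-artists graph. Here is a rough sketch of how such an app could be built on top of some similar_artists(name) lookup that returns related artist names (the lookup itself is assumed, not provided):

```python
from collections import deque

def connect(start, goal, similar_artists, max_hops=6):
    """Breadth-first search for a chain of 'similar artist' links from start
    to goal, at most max_hops long. Returns the path as a list of artist
    names, or None if no connection is found within the hop limit."""
    if start == goal:
        return [start]
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if len(path) - 1 >= max_hops:
            continue  # don't extend paths that have already hit the limit
        for neighbor in similar_artists(path[-1]):
            if neighbor in seen:
                continue
            if neighbor == goal:
                return path + [neighbor]
            seen.add(neighbor)
            queue.append(path + [neighbor])
    return None

# e.g., connect("Black Sabbath", "Dolly Parton", similar_artists)
# might return ["Black Sabbath", ..., "Dolly Parton"]
```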
But, there’s an industry chokepoint. The transaction costs are too high for dealing with a lot of developers. So, Echo Nest is working on open content APIs. If the artist is comfortable with more open models, Echo Nest makes the content available to developers. E.g., the DMCA allows streaming within some limits, e.g., no more than two tracks per album per hour. If you comply, you can pay a compulsory license and not have to first negotiate the rights. Echo Nest lets developers access DMCA streaming of 10M tracks (because Echo Nest has done a deal with Seven Digital in the UK for a license to those 10M tracks for DMCA streaming). This approach means we don’t have to wait for copyright reform; it lowers the transaction costs and provides a filtering mechanism for content owners.
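As a rough picture of what complying with a streaming rule like that involves, here is a sketch of a per-album play limiter. It uses the “no more than two tracks per album per hour” figure as reported in the talk; the actual DMCA performance complement has more conditions than this, so treat it as an illustration, not a real compliance implementation:

```python
import time
from collections import defaultdict, deque

class AlbumRateLimiter:
    """Track recent plays per album and refuse plays that would exceed the limit."""

    def __init__(self, max_plays=2, window_seconds=3600):
        self.max_plays = max_plays
        self.window = window_seconds
        self.recent = defaultdict(deque)  # album_id -> timestamps of recent plays

    def allow(self, album_id, now=None):
        """Return True (and record the play) if another track from this album
        may be streamed now; False if the window is already full."""
        now = time.time() if now is None else now
        plays = self.recent[album_id]
        while plays and now - plays[0] > self.window:
            plays.popleft()  # drop plays that have aged out of the window
        if len(plays) >= self.max_plays:
            return False
        plays.append(now)
        return True

# A player would call limiter.allow(album_id) before queuing each track and
# pick something from a different album whenever it returns False.
limiter = AlbumRateLimiter()
```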
Q: The CEO of Pandora says that Pandora’s survived because humans do the music analysis.
A: It comes down to the quality of the results. We’re powering personalized radio for MTV, Mog, Thumbplay, Spotify, and for an enormous catalog of tracks. There are humans in our system as well: we’re aggregating what people say on the Web. Pandora has problems. E.g., if they want a Klezmer channel, they need about 5,000 tracks, and they can’t afford to put an army of Klezmer musicians to work finding and analyzing tracks. There are also problems with purely machine analysis: it can be hard to tell lo-fi punk from country, or Christian rock from heavy metal.
Q: Is your audio analysis violating copyright?
A: We don’t sell directly. As for copyright, there are a couple of cases. Gracenote (née CDDB) uses a fingerprint to identify tracks. There’s been no litigation over whether what they or we are doing counts as a derivative work. Our agreement with the rights holders is that we’re deriving facts, which are not copyrightable.
Q: Among your developers, which countries are represented?
A: We just did a survey, but we made the mistake of letting them enter a free-text answer to “Where do you live?” So, I’ll get back to you in a year. But there have been music hack days in Europe, São Paulo, maybe one coming in India…
Q: What’s the backend?
A: For audio analysis, we send out a lightweight binary that will analyze an audio track in about 2 seconds. We also offer that as a web service. We make the analysis data available for about 16M tracks. On the cultural analysis side, it’s highly customized, uses some open source (SOLR, Lucene), web crawlers.
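For a sense of how a developer might drive that kind of standalone analyzer, here is a sketch that shells out to a local binary and parses its output. The binary name, invocation, and JSON output format are assumptions for illustration, not the actual Echo Nest tool:

```python
import json
import subprocess

def analyze_locally(audio_path, binary="./audio_analyzer"):
    """Run a (hypothetical) analyzer binary on one track and return its parsed
    JSON output: tempo, key, song sections, and so on."""
    result = subprocess.run(
        [binary, audio_path],
        capture_output=True,
        text=True,
        check=True,
        timeout=30,  # the talk claims roughly 2 seconds per track
    )
    return json.loads(result.stdout)

# analysis = analyze_locally("some_track.mp3")
# print(analysis["tempo"], analysis["key"])
```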
Q: Business model?
A: We’re a data analysis company. The open API is for noncommercial use. If you’re a commercial developer, we’ll charge a monthly fee and take a piece of your app’s revenues. If you’re MTV, you’re willing to pay a large license fee and don’t want to share as much with us. But if you’re developing, say, a jogging music station that matches the beat to your jogging tempo, we charge much less.
Q: Scholarly interest in analyzing your data?
A: Yochai Benkler was interested in the activity data, especially around artists who are giving away their music: we have data on playcount and how people are trending.
Q: Apps do well on iOS, but is it just a few apps?
A: There was more churn than we expected. We looked at the top 100 music apps per month for a year, categorized them, and looked at the number of new names. Streaming apps had 34 different apps in the top 100 in a year. No consolidation yet. (We don’t have access to the long tail of apps.)
Q: What will happen to copyrighted music?
A: Cloud-based access is the answer to peer-to-peer sharing. If Spotify etc. offer a better experience than going through a file-sharing network, that’s what people will do. But that will change the model: a user’s interaction with a track on Pandora is worth much less to an artist than the user buying a CD.
Q: Cost?
A: The apps are often free, but it costs maybe $10/month to get access to the music. The digital music market was about $4.5B last year. RPU (revenue per user) in England is about $55/yr. If that goes to $120/yr, that’ll be a much bigger music market. But maybe it won’t be $10/month, especially if you do a deal for a subset of tracks. Or an ISP opt-out plan for $5/month; the opt-out would make the penetration rates much higher. Too early to tell. Most of the services are just beginning. Spotify, though, has grown to a million subscribers in Europe in just over a year.
Q: Access from car?
A: We’re working with some companies. But, if you have a mobile phone and a car with an audio input, you’re there. OTOH, the biggest music subscription company in the US is probably XM Radio.
Q: Selling your service to advertisers?
A: Record labels buy data from us to help them understand their market. One company is using our music data as a way to figure out how to target consumers for non-music products.
A: We are matchmakers between developers and large brands. The brands want apps built.
Q: How big is a catalog of 10M tracks?
A: 10M tracks x 3.5MB each, so roughly 35TB? Warner Music has to update hundreds of repositories and catalogs every week. There ought to be one centralized catalog. It’ll happen someday. Every attempt so far has been a multi-year industry effort among players who don’t want to standardize on a competitor’s standard. We’re very interested in opening up music metadata. We think there’s a commons approach. Problem: 50 ways to spell Guns ‘n’ Roses [sp]. We’re a text analysis company, so we do that. Every collection has its own ID sets. We released an open service called Rosetta Stone that maps among them. A free service. We’ve released an open source audio fingerprinter and do lookups against our database of tracks for free; if you’re compiling additional fingerprints you have to share them (and we share them, too). (We don’t download the tracks when we analyze them.)
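The ID-mapping problem he describes (“every collection has its own ID sets”) comes down to maintaining a table that links each catalog’s identifier for an artist or track to every other catalog’s identifier. A toy sketch, with invented catalog names and IDs rather than the actual Rosetta Stone service:

```python
# Toy cross-catalog ID map; catalog names and IDs are invented for illustration.
ID_MAP = {
    # canonical id -> per-catalog identifiers
    "artist:guns-n-roses": {
        "catalog_a": "AR12345",
        "catalog_b": "b:9f8e7d",
        "catalog_c": "gunsnroses",
    },
}

def translate(source_catalog, source_id, target_catalog):
    """Map an identifier from one catalog's namespace to another's."""
    for aliases in ID_MAP.values():
        if aliases.get(source_catalog) == source_id:
            return aliases.get(target_catalog)
    return None

print(translate("catalog_a", "AR12345", "catalog_c"))  # -> "gunsnroses"
```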
There are many ways to boil down today’s FCC rejection of Net neutrality (done in the guise of supporting it). Here’s one:
The end of Net neutrality means that those who provide access to the Internet — to our Internet, for it is ours, not theirs — have every economic incentive to keep access scarce. By not providing enough bandwidth, they can claim justification for charging users per bit (or per page, service, download, etc.), and justification for charging Net application/data providers for the right to cut ahead in line.
This is ironic (in the not-funny sense), since the access providers’ stated justification for opposing Net neutrality is that it would discourage investment. But why are they going to invest in providing more bits when they make more money by throttling access? (Competition? Sure, that’d be great. Let’s require them to rent out their lines. Oh, I forgot.) Abundance would turn access provision into a profitable commodity business, which is exactly what users want, and what would stimulate innovation and economic growth.
So, now that Net neutrality is going to be overturned, the access providers will make money by preventing access. Anyone want to bet that the U.S. is now going to climb the charts of average national broadband speeds and of lowest average cost? Does anyone think that we haven’t just pushed back by decades the day when, say, gigabit access will be common across the country?
For shame, FCC.
[Later that day] The FCC has clarified some of what it means. For example, they are not going to allow access providers to charge companies for fast-lane access. It seems that Commissioners Copps and Clyburn nudged the regulations in the right direction. Thank you for that. (Also, see Harold Feld’s take.)