
January 6, 2012

Library News

Did I ever mention the really useful site Matt Phillips and Jeff Goldenson at the Library Innovation Lab put up a couple of weeks ago? If you are interested in libraries and tech, Library News is a community-supported news site where you’ll find a steady stream of interesting articles. Or, put differently, it’s the Hacker News code redirected at library tech articles.

I have it open all day. Try it. Contribute to it. Go library hacker nuts!


Categories: libraries, too big to know Tagged with: 2b2k • libraries Date: January 6th, 2012 dw

Be the first to comment »

January 4, 2012

Starting on the platform for the Digital Public Library of America

For the past 1.5 years or so, I’ve been co-director, along with Kim Dulin, of the Harvard Library Innovation Lab. Among the projects we’ve been working on is LibraryCloud, a multi-library metadata server. (You can see it at work, running underneath ShelfLife, another of our projects, here.) Today the Digital Public Library of America announced that initial (and interim) development work on the DPLA platform will be done by the LibraryCloud team — Paul Deschner and Matthew Phillips — plus our Berkman friends, Daniel Collis-Puro and Sebastian Diaz. I’m the team leader, or whatever you call the person who knows the least. We’ll do this as openly as possible, relying upon the community to help at every phase, but this will be our core work during the first phase of the platform’s development, leading up to an April 26 DPLA Steering Committee meeting.

The DPLA platform will enable developers to write applications using the metadata (primarily about content hosted elsewhere) the DPLA will be aggregating.

We’re excited. Thrilled, actually.


Categories: dpla, libraries Tagged with: dpla Date: January 4th, 2012 dw

5 Comments »

December 21, 2011

CBC Spark on ShelfLife and LibraryCloud

The CBC show Spark a couple of days ago ran an 8 minute piece about the two biggest projects coming out of the Harvard Library Innovation Lab, ShelfLife and LibraryCloud. It does a great job cutting together an interview of me with an illuminating narrative from Nora Young. (I co-direct the Lab, along with Kim Dulin, although credit for these apps goes to our team: Annie Jo Cain, Paul Deschner, Jeff Goldenson, Matt Phillips, and Andy Silva.)

Spark also has posted the full, uncut interview and a good blog post about it.


Categories: libraries, podcast Tagged with: library • librarycloud • podcast • shelflife • spark Date: December 21st, 2011 dw

1 Comment »

December 9, 2011

CBC interview with me about library stuff

The CBC has posted the full, unedited interview with me (15 mins) that Nora Young did last week. We talk about the Harvard Library Lab’s two big projects, ShelfLife and LibraryCloud. (At the end, we talk a little about Too Big To Know.) The edited interview will be on the Spark program.


Categories: everythingIsMiscellaneous, libraries Tagged with: everythingIsMiscellaneous • libraries • shelflife Date: December 9th, 2011 dw

1 Comment »

November 29, 2011

[2b2k] Curation without trucks

If users of a physical library could see the thousands of ghost trucks containing all the works that the library didn’t buy backing away from the library’s loading dock, the idea of a library would seem much less plausible. Rather than seeming like a treasure trove, it would look like a relatively arbitrary reduction.

It’s not that users or librarians think there is some perfect set (although it wasn’t so long ago that picking a shelf’s worth of The Great Books seemed not only possible but laudable). Everyone is pragmatic about this. Users understand that libraries make decisions based on a mix of supporting popular tastes and educating to preferred tastes: The Iliad is going to survive being culled even though it has far fewer annual check-outs than The Girl with the Dragon Tattoo. Curating is a practical art and libraries are good at it. But curating into a single collection that happens to fit within a library-sized building increasingly looks like a response to the weaknesses of material goods, rather than an appropriate appreciation of their cultural value. Curation has always meant identifying the exceptions, but with the new assumption of abundance, curators look for exceptions to be excluded, rather than to be included. In the Age of the Net, we’re coming to believe that just about everything deserves to be in the library for one reason or another.

It seems to me there are two challenges here. The first is redeploying the skills of curators within a hyper-abundant world that supports multiple curations without cullings. That seems to me eminently possible and valuable. The second is cultivating tastes when there are so many more paths of least cognitive and aesthetic resistance. And that is a far more difficult, even implausible, challenge.

That is, our technology makes it easy to have multiple curations equally available, but our culture wants (has wanted?) some particular curations to have priority. Unless trucks are physically removing the works outside the preferred collection, how are we going to enforce our cultural preferences?

The easy solution is to give up on the attempt. The Old White Man’s canon is dead, and good riddance. But you don’t have to love old white men to believe that culture requires education — despite what Nicolas Sarkozy believes, we don’t “naturally” love complex works of art without knowing anything about their history or context — and that education requires taking some harder paths, rather than always preferring the easier, more familiar roads. I won’t argue further for this because it’s a long discussion and I have nothing to say that you haven’t already thought. So, for the moment take it as a hypothesis.

This I think makes clear what one of the roles of the DPLA (Digital Public Library of America) should be.

Ed Summers has warned that the DPLA needs to be different from the Web. If it is simply an index of what is already available, then it has not done its job. It seems to me that even if it curates a collection of available materials it has not done its job. It is not enough to curate. It is not even enough to curate in a webby way that enables users to participate in the process. Rather, it needs to be (imo) a loosely curated assemblage that is rich in helping us not only to find what is of value, but to appreciate the value of what we find. It can do that in the traditional ways — including items in the collection, including them in special lists, providing elucidations and appreciations of the items — as well as in non-traditional, crowd-sourced, hyperlinked ways. The DPLA needs to be rich and ever richer in such tools. The curated works should become ever more embedded into a network of knowledge and appreciation.

So, yes, part of the DPLA should be that it is a huge curated collection of collections. But curation now only has reliable value if it can bring us to appreciate why those curatorial decisions were made. Otherwise, it can seem as if we’re simply looking at that which the trucks left behind.


Categories: everythingIsMiscellaneous, libraries, too big to know Tagged with: 2b2k • curation • dpla • libraries Date: November 29th, 2011 dw

4 Comments »

November 22, 2011

Physical libraries in a digital world

I’m at the final meeting of a Harvard course on the future of libraries, led by John Palfrey and Jeffrey Schnapp. They have three guests in to talk about physical library space.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

David Lamberth lays out an idea as a provocation. He begins by pointing out that until the beginning of the 20th century, a library was not a place but only a collection of books. He gives a quick history of Harvard Library. After the library burned down in 1764, the libraries lived in fear of fire, until electric lights came in. The replacement library (Gore Hall) was built out of stone because brick structures need wood on the inside. But stone structures are dank, and many books had to be re-bound every 30 years. Once Gore Hall filled up, more buildings followed; 25-30 of Harvard’s libraries derive from the search for fireproof buildings, which helps explain the large distribution of libraries across campus. They also developed more than 40 different classification systems. At the beginning of the 20th C, Harvard’s collection was just over one million. Now it adds up to around 18M. [David’s presentation was not choppy, the way this paraphrase is.]

In the 1980s, there was continuing debate about what to do about the need for space. The big issue was open or closed stacks. The faculty wanted the books on site so they could be browsed. But stack space is expensive and you tend to outgrow it faster than you think. So, it was decided not to build any more stack space. There already was an offsite repository (New England Book Depository), but it was decided to build a high density storage facility to remove the non-active parts of the collection to a cheaper, off-site space: The Harvard Depository (HD).

Now more than 40% of the physical collections are at HD. The Faculty of Arts and Sciences started out hostile to the idea, but “soon became converted.” The notion faculty had of browsing the shelves was based on a fantasy: Harvard had never had all the books on a subject on a shelf in a single facility. E.g., search on “Shakespeare” in the Harvard library system: 18,000 hits. Widener Library is where you’d expect to find Shakespeare books. But 8,000 of the volumes aren’t in Widener. Of Widener’s 10K Shakespeare volumes, 4,500 are in HD. So, 25% of what you meant to browse is there. “Shelf browsing is a waste of time” if you’re trying to do thorough research. It’s a little better in the smaller libraries, but the future is not in shelf browsing. Open and closed stacks isn’t the question any more. “It’s just not possible any longer to do shelf browsing, unless we develop tools for browsing in a non-physical fashion.” E.g., catalog browsers, and ShelfLife (with StackView).

There’s nobody in the stacks any more. “It’s like the zombies have come and cleared people out.” People have new alternatives, and new habits. “But we have real challenges making sure they do as thorough research as possible, and that we leverage our collection.” About 12M of the 18M items are barcoded.

A task force saw that within 40 years, over 70% of the physical collection will be off site. HD was not designed to hold the part of the collection most people want to use. So, what can we do that will give us pedagogical and intellectual benefit, and realize the incredible resource that our collection is?

Let me present one idea, says David. The Library Task Force said emphatically that Harvard’s collection should be seen as one collection. It makes sense intellectually and financially. But that idea is in contention with the 56 physical libraries at Harvard. Also, most of our collection doesn’t circulate. Only some of it is digitally browsable, and some of that won’t change for a long long long time. E.g., our Arabic journals in Widener aren’t indexed, don’t publish cumulative indexes, and are very hard to index. Thus scholars need to be able to pull them off the shelves. Likewise for big collections of manuscripts that haven’t even been sorted yet.

One idea would be to say: Let’s treat physical libraries as one place as well. Think of them as contiguous, even though they’re not. What if bar-coded books stayed in the library you returned them to? Not shelved by a taxonomy. Random access via the digital, and it tells you where the work is. And build perfect shelves for the works that need to be physically organized. Let’s build perfect Shakespeare shelves. Put them in one building. The other less-used works will be findable, but not browsable. This would require investing in better findability systems, but it would let us get past the arbitrariness of classification systems. Already David will usually go to Amazon to decide if he wants a book rather than take the 5 mins to walk to the library. By focusing on perfect shelves for what is most important to be browsable, resources would be freed up. This might make more space in the physical libraries, so “we could think about what the people in those buildings want to be doing,” so people would come in because there’s more going on. (David notes that this model will not go over well with many of his colleagues.)

53% of library space at Harvard is stack space. The other 47% is split between patron space and staff space. About 20-25% is staff space. Comparatively, Harvard has less patron space than is typical. The HD is holding half the collection in 20% of the space. It’s 4x as expensive to store a work on a stack on campus as off.
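A back-of-envelope sketch of these figures (my arithmetic, not David’s; it assumes that HD’s “20% of the space” means a share of total shelf capacity, which the talk doesn’t make explicit):

```python
# Rough arithmetic on the space figures quoted in the talk.
# Assumption (not stated in the talk): HD's "20% of the space" is a share of
# total shelf capacity, with the other half of the collection in the remainder.
stack_share = 0.53                       # 53% of Harvard library space is stacks
patron_plus_staff = 1 - stack_share      # the remaining 47%
staff_share = (0.20, 0.25)               # 20-25% is staff space
patron_share = (patron_plus_staff - staff_share[1],
                patron_plus_staff - staff_share[0])  # so roughly 22-27% is patron space

# HD holds half the collection in 20% of the space:
hd_density = 0.5 / 0.2                   # collection per unit of space at HD
campus_density = 0.5 / 0.8               # the on-campus half, in 80% of the space
print(round(hd_density / campus_density, 1))  # 4.0 -- same order as the quoted 4x cost gap
```

The 4x density ratio coming out the same as the quoted 4x cost gap may be coincidence, but it shows how much of the gap sheer storage density could account for.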

David responds to a question: The perfect shelves should be dynamic, not permanent. That will better serve the evolution of research. There are independent variables: Classification and shelf location. We certainly need classification, but it may not need to map to shelf locations. Widener has bibliographic lists and shelf lists. Barcodes give us more freedom; we don’t have to constantly return works to fixed locations.

Mike Barker: Students already build their own perfect shelves with carrels.

Q: What’s the case for ownership and retention if we’re only addressing temporal faculty needs?

A lot of the collecting in the first half of the 20th C was driven by faculty requests. Not now. The question of retention and purchase splits on the basis of how uncommon the piece of info is. If it’s being sold by Amazon, I don’t think it really matters if we retain it, because of the number of copies and the archival steps already in place. The more rare the work, the more we should think about purchase and retention. But under a third of the stack space on campus has ideal environmental conditions. We shouldn’t put works we buy into those circumstances unless they’re being used.

Q: At the Law Library, we’re trying to spread it out so that not everyone is buying the same stuff. E.g., we buy Peruvian materials because other libraries aren’t. And many law books are not available digitally, so we buy them … but we only buy one copy.

Yes, you’re making an assessment. In the Divinity library, Mike looked at the duplication rate. It was 53%. That is, 53% of our works are duplicated in other Harvard libraries.

Mike: How much do we spend on classification? To create call numbers? We annually spend about 1.5-2M on it, plus another million shelving it. So, $3M-3.5M total. (Mike warns that this is a “very squishy” number.) We circulate about 700,000 items a year. The total operating budget of the Library is about $152M. (He derived the classification figure by asking catalogers how long it takes to classify an item without one, divided into salary.)
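Taking Mike’s admittedly squishy numbers at face value, the per-item arithmetic works out like this (a sketch for scale, not an official figure):

```python
# Back-of-envelope on the classification figures quoted above. Mike Barker
# warns these are "very squishy"; treat the outputs the same way.
total_cost = (3.0e6, 3.5e6)     # classification + shelving, $/year (quoted range)
items_circulated = 700_000      # items circulated per year
library_budget = 152e6          # total operating budget, $/year

per_circulated_item = tuple(c / items_circulated for c in total_cost)
budget_share = tuple(c / library_budget for c in total_cost)

print(f"${per_circulated_item[0]:.2f}-${per_circulated_item[1]:.2f} per circulated item")
print(f"{budget_share[0]:.1%}-{budget_share[1]:.1%} of the operating budget")
```

So classification and shelving run on the order of $4-5 per circulated item, or roughly 2% of the operating budget, which is the kind of figure the perfect-shelves proposal would free up.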

David: Scanning in tables of contents, indexes, etc., lets people find things without having to anticipate what they’re going to be interested in.

Q: Where does serendipity fall in this? What about when you don’t know what you’re looking for?

David: I agree completely. My dissertation depended on a book that no one had checked out since 1910. I found it on the stacks. But it’s not on the shelves now. Suppose I could ask a research librarian to bring me two shelves worth of stuff because I’m beginning to explore some area.

Q: What you’re suggesting won’t work so well for students. How would not having stacks affect students?

David: I’m being provocative but concrete. The status quo is not delivering what we think it does, and it hasn’t for the past three decades.

Q: [jeff goldenson] Public librarians tell us that the recently returned trucks are the most interesting place to go. We don’t really have the ability to see what’s moving in the Harvard system. Yes, there are privacy concerns, but just showing what books have been returned would be great.

Q: [palfrey] How much does the rise of the digital affect this idea? Also, you’ve said that the storage cost of a digital object may be more than that of physical objects. How does that affect this idea?

David: Copyright law is the big If. It’s not going away. But what kind of access do you have to digital objects that you own? That’s a huge variable. I’ve premised much of what I’ve said on the working notion that we will continue to build physical collections. We don’t know how much it will cost to keep a physical object for a long time. And computer scientists all say that digital objects are not durable. My working notion here is that the parts that are really crucial are the metadata pieces, which are more easily re-buildable if you have the physical objects. We’re not going to buy physical objects for all the digital items, so the selection principle goes back to how grey or black the items are. It depends on whether we get past the engineering question about digital durability — which depends a lot on electromagnetism as a storage medium, which may be a flash in the pan. We’re moving incrementally.

Q: [me] If we can identify the high value works that go on perfect shelves, why not just skip the physical shelves and increase the amount of metadata so that people can browse them looking for the sort of info they get from going to the physical shelf?

A: David: Money. We can’t spend too much on the present at the expense of the next century or two. There’s a threshold where you’d say that it’s worth digitizing them to the degree you’d need to replace physical inspection entirely. It’s a considered judgment, which we make, for example, when we decide to digitize exhibitions. You’d want to look at the opportunity costs.

David suggests that maybe the Divinity library (he’s in the Phil Dept.) should remove some stacks to make space for in-stack work and discussion areas. (He stresses that he’s just thinking out loud.)

Matthew Sheehy, who runs HD, says they’re thinking about how to keep books 500 years. They spend $300K/year on electricity to create the right environment. They’ve invested in redundancy. But, the walls of the HD will only last 100 years. [Nov. 25: I may have gotten the following wrong:] He thinks it costs about $1/year to store a book, not the usual figure of $0.45.

Jeffrey Schnapp: We’re building a library test kitchen. We’re interested in building physical shelves that have digital lives as well.

[Nov. 25: Changed Philosophy school to Divinity, in order to make it correct. Switched the remark about the cost of physical vs. digital in the interest of truth.]


Categories: everythingIsMiscellaneous, libraries, taxonomy, too big to know Tagged with: 2b2k • everythingIsMiscellaneous • libraries • shelflife Date: November 22nd, 2011 dw

4 Comments »

November 19, 2011

[avignon] Google’s Cultural Institute

Steve Crossan, head of the Cultural Institute in Paris, is demo-ing Google’s super spiffy swirling virtual bookcase. The Cultural Institute was set up in April. It’s a group of engineers. They’re building tools and services for the cultural sector, to help people get to online content in an emotionally engaging way.

One pilot project: Dead Sea Scrolls online, searchable and zoomable. Another the WebGL Bookcase.

Another: Memory of a Nation. In 2012 they’re focusing on bringing together archival content with personal testimony.

They’re also developing a physical space. In a virtual world, what shall one do with a physical space to explore culture? The space will be opening in April-May 2012.

Steve introduces Amit Sood to talk about the Google Art Project. He was working on Android, but spent his 20% time (“on Saturdays and Sundays” :) on a collaborative project with 17 great museums. It launched on Feb 1. It’s trying to give an idea of how to enjoy the museums and art in a different way.

He points out that it does not look like a Google page. He goes to a Bruegel at the Met. He zooms in extremely tight (brushstroke close) and very easily, without obvious latency. The “gigapixel” zoom is crazy good. There’s an info panel with plenty of info, including multi-media. You can also do a street view through the museum. (Not all the paintings are at the gigapixel level.) You can add artworks to your personal collections and annotate them, including sharing details. (The details can always be zoomed back out.) You can share your collections on any social medium.

Why did Google do the project? It started out of passion, not out of corporate strategy. But after they launched, it got a lot of internal support. The four person team was multicultural. Access to info is critical, he says. He grew up in India, where simply walking into a museum was not a real possibility. He reminds us how lucky we are. That was his personal motivation. Other team members did it in order to create new audiences. How can we reduce the snob factor of museums? Finally, because it’s an immersive experience.

25M people have visited. 100,000 collections. Version 2 is coming.

Q: Will you open archives of unplayed music? And can artists create their own gigapixel images?

A: We’re working with archives.


Categories: libraries Tagged with: google • libraries • museums Date: November 19th, 2011 dw

1 Comment »

[avignon] [2b2k] Robert Darnton on the history of copyright, open access, the dpla…

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

We begin with a report on a Ministerial meeting yesterday here on culture — a dialogue among the stakeholders on the Internet. [No users included, I believe.] All agreed on the principles proposed at Deauville: It is a multi-stakeholder ecosystem that complies with law. In this morning’s discussion, I was struck by the convergence: we all agree about remunerating copyright holders. [Selection effect. I favor copyright and remunerating rights holders, but not as the supreme or exclusive value.] We agree that there are more legal alternatives. We agree that the law needs to be enforced. No one argued with that. [At what cost?] And we all agree we need international cooperation, especially to fight piracy.

Now Robert Darnton, Harvard Librarian, gives an invited talk about the history of copyright.

Darnton: I am grateful to be here. And especially grateful you did not ask me to talk about the death of the book. The book is not dead. More books are being produced in print and online every year than in the previous year. This year, more than 1 million new books will be produced. China has doubled its production of books in the past ten years. Brazil has a booming book industry. Even old countries like the US find book production is increasing. We should not bemoan the death of the book.

Should we conclude that all is well in the world of books? Certainly not. Listen to the lamentations of authors, publishers, booksellers. They are clearly frightened and confused. The ground is shifting beneath their feet and they don’t know where to stake a claim. The pace of tech is terrifying. What took millennia, then centuries, then decades, now happens all the time. Homesteading in the new info ecology is made difficult by uncertainty about copyright and economics.

Throughout early modern Europe, publishing was dominated by guilds of booksellers and printers. Modern copyright did not exist, but booksellers accumulated privileges, which Condorcet objected to. These privileges (AKA patents) gave them the exclusive rights to reproduce texts, with the support of the state. The monarchy in the 17th century eliminated competitors, especially ones in the provinces, reinforcing the guild, thus gaining control of publishing. But illegal production throve. Avignon was a great center of piracy in the 18th century because it was not French. It was surrounded by police intercepting the illegal books. It took a revolution to break the hegemony of the Parisian guild. For two years after the Bastille, the French press enjoyed liberty. Condorcet and others had argued for the abolition of constraints on the free exchange of ideas. It was a utopian vision that didn’t last long.

Modern copyright began with the 1793 French copyright law that established a new model in Europe. The exclusive right to sell a text was limited to the author for lifetime + 10 years. Meanwhile, the British Statute of Anne in 1710 created copyright. Background: The stationers’ monopoly required booksellers — and all had to be members — to register. The oligarchs of the guild crushed their competitors through monopolies. They were so powerful that they provoked resentment even within the book trade. Parliament rejected the guild’s attempt to secure the licensing act in 1695. The British celebrate this as the beginning of the end of pre-publication censorship.

The booksellers lobbied for the modern concept of copyright. For new works: 14 years, renewable once. At its origin, copyright law tried to strike a balance between the public good and the private benefit of the copyright owner. According to a liberal view, Parliament got the balance right. But the publishers refused to comply, invoking a general principle inherent in common law: When an author creates work, he acquires an unlimited right to profit from his labor. If he sold it, the publisher owned it in perpetuity. This was Diderot’s position. The same argument occurred in France and England.

In England, the argument culminated in the 1774 decision in Donaldson v. Beckett, which reaffirmed 14 years renewable once. Then we Americans followed in our Constitution and in the first copyright law in 1790 (“An act for the encouragement of learning”, echoing the British 1710 Act): 14 years renewable once.

The debate is still alive. The 1998 copyright extension act in the US was considerably shaped by Jack Valenti and the Hollywood lobby. It extended copyright to life + 70 (95 years for corporate works). We are thus putting most literature out of the public domain and into copyright that seems perpetual. Valenti was asked if he favored perpetual copyright and said “No. Copyright should last forever minus one day.”

This history is meant to emphasize the interplay of two elements that go right through the copyright debate: A principle directed toward the public good vs. self-interest for private gain. It would be wrong-headed and naive to only assert the former. But to assert only the latter would be cynical. So, do we have the balance right today?

Consider knowledge and power. We all agree that patents help, but no one would want the knowledge of DNA to be exploited as private property. The privatization of knowledge has become an enclosure movement. Consider academic periodicals. Most knowledge first appears in digitized periodicals. The journal article is the principal outlet for the sciences, law, philosophy, etc. Journal publishers therefore control access to most of the knowledge being created, and they charge a fortune. The price of academic journals rose ten times faster than the rate of inflation in the 1990s. The J of Comparative Neurology is $29,113/year. The Brain costs $23,000. The average list price in chemistry is over $3,000. Most of the research was subsidized by taxpayers. It belongs in the public domain. But commercial publishers have fenced off parts of that domain and exploited it. Their profit margins run as high as 40%. Why aren’t they constrained by the laws of supply and demand? Because they have crowded competitors out, and the demand is not elastic: Research libraries cannot cancel their subscriptions without an uproar from the faculty. Of course, professors and students produced the research and provided it for free to the publishers. Academics are therefore complicit. They advance their prestige by publishing in journals, but they fail to understand the damage they’re doing to the Republic of Letters.

How to reverse this trend? Open access journals. Journals that are subsidized at the production end and are made free to consumers. They get more readers, too, which is not surprising since search engines index them and it’s easy for readers to get to them. Open Access is easy access, and the ease has economic consequences. Doctors, journalists, researchers, housewives, nearly everyone wants information fast and costless. Open Access is the answer. It is a little simple, but it’s the direction we have to take to address this problem at least in academic journals.

But the Forum is thinking about other things. I admire Google for its technical prowess, but also because it demonstrated that free access to info can be profitable. But it ran into problems when it began to digitize books and make them available. It got sued for alleged breach of copyright. It tried to settle by turning it into a gigantic business and sharing the profits with the authors and publishers who sued them. Libraries had provided the books. Now they’d have to buy them back at a price set by Google. Google was fencing off access to knowledge. A federal judge rejected it because, among other points, it threatened to create a monopoly. By controlling access to books, Google occupied a position similar to that of the guilds in London and Paris.

So why not create a library as great as anything imagined by Google, but that would make works available to users free of charge? Harvard held a workshop on Oct. 1, 2010 to explore this. Like Condorcet, a utopian fantasy? But it turns out to be eminently reasonable. A steering committee, a secretariat, and 6 workgroups were established. A year later we launched the Digital Public Library of America at a conference hosted by the major cultural institutions in DC, and in April 2013 we’ll have a preliminary version of it.

Let me emphasize two points. 1. The DPLA will serve a wide and varied constituency throughout the US. It will be a force in education, and will provide a stimulus to the economy by putting knowledge to work. 2. It will spread to everyone on the globe. The DPLA’s technical infrastructure is being designed to be interoperable with Europeana, which is aggregating the digital collections of 27 countries. National digital libraries are sprouting up everywhere, even Mongolia. We need to bring them together. Books have never respected boundaries. Within a few decades, we’ll have worldwide access to all the books in the world, and images, recordings, films, etc.

Of course a lot remains to be done. But, the book is dead? Long live the book!

Q: It is patronizing to think that the USA and Europe will set the policy here. India and China will set this policy.

A: We need international collaboration. And we need an infrastructure that is interoperable.


Categories: copyright, culture, libraries, open access, too big to know Tagged with: 2b2k • avignon • copyright • dpla • history • open access • robert darnton Date: November 19th, 2011 dw

1 Comment »

November 7, 2011

Avi Warshavsky on the future of textbooks

I’ve posted a brief video interview with Avi Warshavsky of the Center for Educational Technology, the leading textbook publisher in Israel. Avi is a thoughtful and innovative software guy who has been experimenting with new ways of structuring textbooks.


Categories: education, libraries Tagged with: education • podcasts • textbooks Date: November 7th, 2011 dw

Be the first to comment »

November 1, 2011

[2b2k] Interview with Kevin Kelly on What Libraries Want

Dan Jones just posted my Library Lab Podcast conversation with Kevin Kelly, of whom I’m a great admirer.


Categories: libraries, too big to know Tagged with: 2b2k • libraries • podcasts Date: November 1st, 2011 dw

Be the first to comment »



Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!