11/19/2011 From Academic Librarian: On Libraries, Rhetoric, Poetry, History, & Moral Philosophy, the blog of a Princeton librarian, comes the following somewhat amusing and lengthy article concerning print and ebooks.
The Codex is Dead; Long Live the Codex
Posted on November 15, 2011
ACRLog had a post last week about humanists wanting print books rather than ebooks. Here’s a key passage:
Ebooks seem like sweet low-hanging fruit – they have enhanced searchability, accessibility at any time or place, and reduced storage and preservation costs. What’s not to love? Ebooks seem to make our students very happy. Often they don’t want to read a book cover to cover (although their professors might wish they would), and searching for relevant passages seems to satisfy their needs for many assignments. And journal literature seems exempt from the preference for print – I haven’t heard many complaints about deaccessioning back runs of print journals represented in JSTOR’s collections, for instance.
When thinking of humanities scholars and their books, I don’t see how it matters if most students don’t want to read their books all the way through or want to treat scholarly monographs the way they treat encyclopedias, as collections of information tidbits to pick and choose among. The scholarly monograph in the humanities isn’t designed to be read that way. It’s not a report of research results, but the result of research, and the analyses and arguments develop throughout the book or at least throughout the chapters. And what’s more, scholars don’t just dip into one book at a time to get some useful fact; they immerse themselves in books and frequently move among many different books while working.
The writer notes that the same faculty who demand print books for their work are happy to read novels on their ebook readers while relaxing or traveling. “It’s one thing, they tell us, to read for pleasure on a screen – but it’s quite another to read for understanding, for critique, for engaging in the scholarly conversation. And this isn’t a generational matter – some of the faculty I know who seem most committed to print are younger than forty.” I don’t know why this would surprise any librarians who work in the humanities. It’s easy to forget amidst the technological splendor that the codex is an extremely useful tool. Humanists often work on research projects that involve examining multiple texts and comparing them, sometimes moving from book to book and sometimes from passage to passage within those books. Spreading several books on a desk and flipping back and forth between passages is relatively easy, and much easier than trying to do the same thing on any current ebook reader. Annotating a book with pencil in hand is also faster and easier than doing it on any ebook reader I’ve yet seen. It’s easy enough for me to think of examples from my own work. This summer I was writing a book chapter that was more or less intellectual history. The bulk of the chapter focused on four or five primary texts as well as a handful of secondary sources. I was trying both to analyze specific arguments occurring throughout the primary texts and to compare those arguments with the ones in the other primary texts. The easiest way for me to do this was to have the books spread out around me, so that I could quickly put down one and pick up another or flip back and forth between several relevant passages in the same book.
Working with printed books is at the moment the fastest and easiest way to do this, which is probably why the scholars who do this sort of work the most like printed books. Everything else is clunky by comparison, especially ebook readers. This kind of work explains why humanists like ebook readers for casual reading but not for scholarly work. Leaving aside the DRM restrictions that make getting and reading ebooks so irritating at times, the ebook reader technology just isn’t sophisticated enough for widespread humanistic scholarly use yet. When it’s possible to flip instantly among several books and between passages on a device that’s easy on the eyes and allows annotation as quickly as a pencil does, this might change. Indeed, I was unsurprised by the Ebrary ebook survey that showed “The vast majority of students would choose electronic over print if it were available and if better tools along with fewer restrictions were offered.” To that I would add two caveats: first, better tools with fewer restrictions aren’t being offered, and second, the majority of students aren’t humanities scholars. My library did a large campus survey of faculty and students last year, in which 92% of humanists viewed print books as “essential.” This will change when the new tools become as adequate and easy to use as the old tools.
Sure, there might be ways around this, assuming one can get all the necessary books in digital format. (For the project I was working on this summer, I used books that were print-only and hard to get because few libraries held them, and they weren’t for sale or I would have purchased them for my own library. So much for PDA-only libraries relying on used-book dealers to meet their retrospective collection development needs.) But assuming I could, what current technology would suffice to replicate the ease of moving among books and passages of books? Maybe having six tablet computers would work. They would have to be devices that displayed PDFs well, too, so that the secondary journal literature could also easily be read. But that sort of defeats the purpose of ebooks: if I had to carry around, much less purchase, a handful of ebook readers, the main advantage of having one would be eliminated.
I think this is an example where breathless ebook prophets are pushing a format that for now remains an inadequate tool for humanistic scholarly research, and I suspect they’re doing so because they never do any of that type of research, so they either don’t know or don’t care about the inadequate tools. Technology that doesn’t make work easier is bad technology, no matter how much some people might like it for their casual reading. When the tools improve, no one will be protesting the demise of the codex. The ideal might be one of those virtual reality gesture-input computers like in Minority Report. All it might take is a computer that could simultaneously project multiple, easily manipulated texts in the space surrounding a scholar, texts that could be read, highlighted, annotated, and flipped through as easily as printed books. Making it easy to copy and paste quotations into whatever passes for a virtual reality word processor would be a boon as well. When that technology is as ubiquitous in academia as printed books, then the problem will be solved and humanists might abandon the codex. And if they don’t, that’s the time to start chastising them for their reactionary views, because it’s not reactionary to resist technology that makes one’s life more difficult.
The immediate future will be considerably more banal, but I can see the trend with both the new Ebrary ebook downloads and the new ebook platform on the new Project Muse beta site. Both allow quick and easy downloading of portions of books into PDF format, and even the entire book if you don’t mind it being broken up into sections or chapters. This mimics the availability of scholarly articles through many databases, and everyone admits that even humanist scholars have no problem with electronic articles, just electronic books. That’s because most of them print the articles out and read them on paper, which they will now be able to do with lots of future ebooks. I’d rather have the virtual reality library, but until that happens PDF printouts might be as close to an ebook-only future as most humanists are likely to get. Libraries might stop buying printed books some day. The codex is dead. Scholars will then print out their PDF ebooks to make reading and research easier. Long live the codex.
11/16/2011 From the Chronicle of Higher Education comes this piece in which the author addresses the demise of the printed book in this century.
November 13, 2011
In the 21st-Century University, Let’s Ban Books
James Yang for The Chronicle
By Marc Prensky
Recent news that South Korea plans to digitize its entire elementary- and secondary-school curriculum by 2015, combined with the declining cost of e-readers and Amazon’s announcement earlier this year that it is selling more e-books than print books, prompts an interesting question: Which traditional campus will be the first to go entirely bookless? Not, of course, bookless in the sense of using no book content, but bookless in the sense of allowing no physical books. My guess is that this will make some institution famous.
Already, just about everything that an undergraduate needs to read is available in electronic form. Whatever isn’t there electronically, librarians, students, or professors can easily scan, as many already do.
Some colleges are already heading in this direction by requiring or handing out iPod Touches, iPads, Kindles, or Nooks, often preloaded with textbooks and other curricular materials, or by disallowing paper texts for online courses. But I suggest that it’s time to go much further: to actually ban nonelectronic books on campus. That would be a symbolic step toward a much better way of teaching and learning, in which all materials are fully integrated. It could involve a pledge similar to the one that language students and instructors at Middlebury Language Schools take to speak only the foreign languages in which they are immersed during the study program.
In this bookless college, all reading—which would still, of course, be both required and encouraged—would be done electronically. Any physical books in students’ possession at the beginning of the year would be exchanged for electronic versions, and if a student was later found with a physical book, it would be confiscated (in return for an electronic version). The physical books would be sent to places and institutions that wanted or needed them. Professors would have a limited time in which to convert their personal libraries to all-digital formats, using student helpers who would also record the professors’ marginal notes.
Why, in a world in which choice and personal preference are highly valued, would any college want to create such a mandate? Because it makes a bold statement about the importance of moving education into the future. It is, in a sense, only a step removed from saying, “We no longer accept theses on scrolls, papyrus, or clay tablets. Those artifacts do still exist in the world, but they are not the tools of this institution.” Or: “In this institution we have abandoned the slide rule. Those who find it useful and/or comforting can, of course, use it, but not here.”
Let me be clear that I’m not advocating that we get rid of the good and valuable ideas, thoughts, or words in books—only that we transfer them to (and have students absorb them through) another form. Much of what students need to study is already in the public domain and can easily, in instances where it hasn’t already been done, be converted to electronic form. Most contemporary works exist electronically, as do a huge number of historical books and documents. This would be an incentive to scan more of them. It would also provide an opportunity for academics and others to consider how notions of intellectual-property rights might need to be updated for the digital age.
Of course, pushback is to be expected. I think less of it would come from faculties in the sciences, who feel most deeply the need to connect information more completely and be sure it is up-to-date, than from humanities faculties, who often teach particular physical books (and might tend to be far more attached to them). Such a mandate might not go over well with all students, either, at least at first, because many have been inculcated since birth to appreciate the value of physical books.
But I believe the change would be transformational, in very positive ways, for education. Once the change happened, the college and its professors would be expected to enhance all electronic texts in useful ways. Student materials might contain not just the commentary of the individual professor but of professors all over the world. A student’s Hamlet might contain not just the notes that a student would find in a print edition but collective notes from actors, directors, scholars, and other contributors. The college’s version of Hamlet might be linked to whatever notes Laurence Olivier or Harold Bloom had written in the margins of their own copies. It might be linked to scenes and versions already on YouTube, or to open courseware from institutions around the world.
Selecting and curating such enhancements to enlighten students without overwhelming them would be the responsibility of the professors. They could build in questions that would prompt reflection and discussion, and have those discussions shared classwide, campuswide, or worldwide. Students could keep online records of all their notes, thoughts, and readings; and, unlike with traditional college texts, they could find, collate, and link to those notes and records forever.
Many entrepreneurs are already inventing software that allows the quick and fertile connection of one’s ideas and those of others, but an all-digital campus would provide a powerful incentive to develop those programs even faster and take them further. Various all-digital campuses could collaborate to develop specifications for such helpful software as well as open-source tools.
Sure, it will take some transition time to get to the all-digital college, but the advantages are many.
First, we would wean students (and scholars) off the physical books of the past, just as they were once weaned off scrolls when new and more efficient technology came along. I have heard all the arguments for the physical book, from the “feel of the page” to the effects of “printed vs. on-screen words” to the “way we take in information” to the fact that “a book lasts a long time.” But those arguments are unconvincing when weighed against the many advantages of going all-electronic. Far better than having colleges preserve the use of physical books for certain advantages would be for colleges to find ways to ensure that we can achieve all the results we want with the integrated tools of the future.
Second, books—and commentaries on books—would start to be connected in ways they aren’t now. We could actually search for the source of a particular quote, or for comments on particular ideas and passages, in ways we can’t even begin to do today. Yet the integrity of the individual work would still be preserved.
Third, and I believe this to be the greatest advantage, ideas would be freed from the printed page, where they have been held captive for too many centuries. In addition to being a dissemination mechanism and an archive, the physical book is, in many ways, a jail for ideas—once a book is read, closed, and shelved, for most people it tends to stay that way. Many of us have walls lined with books that will never be reopened, most of what is in them long forgotten.
But what if all those books were in our pockets and could be referred to whenever we thought of them? The idea of having one’s own personal library of physical books, so useful in earlier times, is no longer worth passing on to our students; the idea of building a digital pocket library of books that students could visit and revisit at any time certainly is.
Colleges and professors exist, in great measure, to help “liberate” and connect the knowledge and ideas in books. We should certainly pass on to our students the ability to do this. But in the future those liberated ideas—the ones in the books (the author’s words), and the ones about the books (the reader’s own notes, all readers’ thoughts and commentaries)—should be available with a few keystrokes. So, as counterintuitive as it may sound, eliminating physical books from college campuses would be a positive step for our 21st-century students, and, I believe, for 21st-century scholarship as well. Academics, researchers, and particularly teachers need to move to the tools of the future. Artifacts belong in museums, not in our institutions of higher learning.
So will your campus be the first to go bookless? It’s a risky step, certainly, but one that will attract forward-thinking students and professors, and be long remembered.
Marc Prensky is an educational author and software designer. His book Teaching Digital Natives: Partnering for Real Learning was published by Corwin in 2010. His next book, From Digital Natives to Digital Wisdom, will be published by Corwin in January.
9/19/2011 From the Guardian comes this article from Great Britain, in which Lloyd Shepherd ponders some of the same issues as we librarians do. An interesting, if a bit wordy, treatise.
The death of books has been greatly exaggerated
- Radical change is certainly producing some alarming symptoms – but much of the doomsayers’ evidence is anecdotal, and it’s possible to read a much happier story
This time last year, I was metaphorically invited to the only party I’ve ever wanted to be seen at. My first novel, The English Monster, was picked up by an agent, and then by a publisher, Simon and Schuster. It hits the streets in March 2012.
I’ve made it, I thought to myself as I clutched my invite to the most exclusive set of all. I’m going to be a published author.
So imagine my surprise – nay, dismay – to discover that publishing’s streets were not paved with gold, but stalked by the anxious, the gloomy, the suicidal. “Publishing’s dead!” shouted men in sackcloth on Bloomsbury street corners. I had arrived at the party, but the coats were being handed out, the drink had dried up and the hostess had collapsed.
So I asked myself (somewhat desperately, positively naively): are things really that bad? What is the actual state of book publishing in Britain? Can writers really only look forward to a life of penury? Or should I stick my head in the sand, if only to deaden the sound of commissioning editors weeping into their lattes?
We’re doomed …
If you don’t believe that people are worried, you need only look to the Guardian’s own recent debate at the Edinburgh international book festival, called, efficiently and apocalyptically, “The End of Books?”. One of the contributors was writer Ewan Morrison who, in a piece on guardian.co.uk/books after the event, expounded his view that the printed book will go within 25 years, as readers turn more and more to ebooks. What’s worse, these ebooks will collapse in value, because that is what today’s younger consumers want, as demonstrated by the online shift to free news. Publishers are no longer paying advances to authors, or if they are, these advances are a fraction of what they were. And all the time the relentless combination of pirating, retail competition and the demands of younger consumers means that the price of every piece of content – a song, a film, a book – trends towards zero.
We are, in summary and to paraphrase a certain Scottish member of the Home Guard, doomed.
Better than you’d think
But hang on a minute. Anecdotally, that’s a pretty awe-inspiring collection of proofs. But the plural of anecdote is not data. What is the data telling us?
According to Nielsen BookScan, the publishing industry standard for book sales data, book sales are pretty healthy, with one significant proviso which I’ll come to. Ten years ago in 2001, 162m books were sold in Britain. Ten years later – a decade in which the internet bloomed, online gaming exploded, television channels proliferated, digital piracy rampaged and, latterly, recession gloomed – 229m books sold. So, a 42% increase in the number of books sold over the last 10 years.
But wait, say the gloomy. What about the cash? Haven’t publishers been forced by avaricious retail giants into a fearsome downward spiral? Discounting has sharpened, but not as much as you’d think. The standard discount on the recommended retail price of a book in 2001 was already at 17.6%. In 2010 it was 26.7%. We’ll return to this later.
Even with this discounting, last year UK consumer publishing drew in sales of £1.7bn, up 36% on 2001. Adult fiction saw an increase of 44%, to £476m; and young adult and children’s fiction, realm of all those pesky copiers and pirateers and downloaders, saw sales more than double to £325m.
So why the very, very deep uncertainty and the gloom? Because 2011 is the year this may all change. Here’s the proviso on the sales figures I mentioned. These numbers above do not include any ebook sales at all. Nielsen BookScan hasn’t yet finalised its tracking of ebooks, and the year to date has seen a drop in printed book sales against 2010. But again, not as much as you’d think. Up to the week ending 13 August, overall sales were down almost 6% on 2010 in volume terms, and just over 4% in value.
Ebooks: death or glory?
The question – the defining question – is whether that gap is being filled by ebooks. David Walter, research and development analyst at Nielsen BookScan, told me that the 2011 decline was at least “partially” down to the transition to ebooks, and also mentioned the general economic climate and the reduction in the number of retail booksellers. But there are no numbers against that. Not to put too fine a point on it, we just don’t know. So can we perceive yet what impact ebooks may be having?
We must look to the US for the early signs. Ewan Morrison states in his piece that “Barnes and Noble claims it now sells three times as many digital books as all formats of physical books combined.” Well, not quite. That figure is for online sales through bn.com only. In its most recent quarterly sales report, B&N reported an overall increase in sales at bn.com of over 50%. For the year, Barnes and Noble’s total sales across all its business were up 20% to a record $7bn. But Barnes and Noble is still losing money ($59m in the fourth quarter), for the good reason that it’s struggling to compete with the new, very big, very scary kid on the block: Amazon.
Ah, yes. Amazon. The boogeyman. A company now worth almost $90bn (£55bn). If you’re an independent bookseller, Amazon must look like a cold, relentless stealth bomber casting its shadow over the pavement outside. But to the publisher and the writer, don’t things in Amazonia look rather different?
For one thing, people are buying more and more books in Amazonia, and more and more of them are on Amazon’s ebook platform the Kindle. In May this year, Amazon announced that, for the first time, it was selling more Kindle versions of books than paperback and hardbacks combined, and (here’s the thing that doesn’t get quoted so often) sales of print books were still increasing.
Amazon also announced that, in the year to May 2011, it had seen the fastest year-on-year growth rate for its US books business, when expressed in volume and in dollars. This included books in all formats, print and digital. In the UK, less than one year after opening its UK Kindle store, Amazon.co.uk is selling more Kindle books than hardcover books. And again, this is while hardcover sales continue to grow.
Let’s not be naive. Any retail channel that ends up being dominated by one player will end up squeezing its producers; just ask a farmer. But Amazon is, right now, giving people what they want: competitive pricing, rapid delivery, massive choice, good customer service. And it’s selling books. A lot of books.
The rush to zero
So, what about discounting? Amazon is undercutting, goes the cry, selling cheap and devaluing the product. And don’t get us started on Tesco …
Is this true? The discounting has increased, no doubt; but the average cost to the consumer of an adult fiction book in 2010 is only 30p less than in 2001. That figure will be higher when inflation is accounted for, but it’s not slashed-and-burned; it means a fiction book still sold for £6.11 in 2010, on average.
There is a deeper, much more existential concern: that, basically, all readers are ultimately freeloaders and want to get books for free, and that the transition to digital devices will see an explosion in piracy and a collapse in pricing. The evidence for this is … well, I’m not sure what the evidence is, to be frank. Newspapers, it is said, are being destroyed because of people’s appetite for free news. And we all know what happened to music, don’t we? Those cockamamie teenagers ruined everything by downloading the stuff illegally.
But where is the evidence that this will happen in the same way with books? One reason the music industry got so badly hit was that it took the devil’s own time putting a viable digital distribution mechanism in place; then along came iTunes and, lo and behold, people download less music illegally where they have the tools to download it legally. It is certainly true that rock stars are no longer going to be buying up chunks of the home counties, but wasn’t that in itself an anomaly that lasted barely two decades? New music acts are still being signed, new music is still being produced: arguably more of it, or a greater variety, than ever before.
Meanwhile, in Amazonia, Kindle versions of new books are outselling hardback versions – at similar prices. So is there not another view: that people are paying relatively high amounts for books a year before their paperback release, because they want them quickly on their digital devices? That convenience trumps pricing and format every time? There are significant and important complaints about the agreements established between major publishers and Amazon over the pricing of ebooks, and this will no doubt go through significant changes (although you won’t get any publisher to discuss ebook pricing with you). For now, people are voting with their wallets. They’re buying books.
So the data, at least, shows that book sales are in pretty good health, with the proviso that, in 2011, the data is out of step with buying habits and we won’t know the true picture for a while, although early indicators from the US suggest that things look pretty good.
The impecunious author
So what about the other side of the coin? What impact is this change likely to have on authors? Ewan Morrison argues that author advances have collapsed:
“With the era of digital publishing and digital distribution, the age of author advances is coming to an end … The Bookseller claimed in 2009 that ‘Publishers are cutting author advances by as much as 80% in the UK’. A popular catchphrase among agents, when discussing advances, meanwhile, is ‘10K is the new 50K’. And as one literary editor recently put it: ‘The days of publishing an author, as opposed to publishing a book, seem to be over.’”
Remember, though: the plural of anecdote is not data. Agreements between authors and publishers are confidential things; any evidence for a decline in advances is entirely anecdotal. That said, things do seem to have changed. Fewer, bigger advances are gravitating towards books which spark debate, which generate conversation, which (and this surprised me) tend towards the more literary end of the spectrum, where books stay in print for longer and sell copies over years, not weeks. Meanwhile advances for genre and commercial fiction do seem to have fallen back from the highs of the 1980s and 1990s.
According to Kate Pool, deputy general secretary at the Society of Authors: “The average advance probably has gone down. The number of commercially marginal books which are no longer commissioned/accepted by publishers when offered on spec, has gone up.”
On the other hand, authors are not seeing a sudden collapse in their incomes. The Society of Authors did a survey in 2000 that showed the average annual figure was £16,600; only 5% of authors earned over £75,000; 75% earned less than £20,000. A more recent survey, done by the Authors Licensing and Collecting Society, came up with very similar figures.
So where does this sense of authors being squeezed come from? It could simply be a sign that publishing, as an industry, is becoming more commercial, more competitive, more efficient. You may not like that. You probably don’t. There is a profound queasiness which breaks out at the conjunction of art and business. But the pressure is definitely there. As Maxine Hitchcock, editorial director at my publisher Simon and Schuster puts it: “You’ve got to publish harder and more nimbly than ever before.”
There is another pressure on writers’ incomes: it seems that there are simply more writers to go around. Last month, membership of the Society of Authors passed 9,000 people for the first time since the Society was formed in 1884. There has been a steady increase in the number of book titles published in the UK, from almost 110,000 in 2001 to just over 150,000 in 2010. More surprising, perhaps, is the Nielsen BookScan data on the number of new publishers each year in the UK and Ireland. What this actually records is new entities applying for ISBN records in each year. In 2001, there were 2,248 such new entities. In 2010, there were 3,151 of them. Nielsen BookScan has this quite interesting thing to say about that increase: “The year-on-year increase between 2001 and 2010 shows that last year’s figure is the highest in this period and can be explained by the fact that many new authors continue to publish their work under their own publishing name.”
And I’ll bet that there are more titles available today from more authors than at any other time in history. So, even if people were buying as many books today as they were a decade ago, the average writer’s income would be falling. Now, that may not be good for the average writer – but it might be a good thing for society as a whole.
Onwards to a glorious future?
What does all this data add up to? Hardly an industry in its death throes, so one must ask why there are so many long faces about the place. Let’s not be naive. These are times of massive change, and change is never, ever comfortable. The retail sector worries publishers and authors alike; in the past year, publishers have lost Woolworth, Borders and British Bookshops as sales channels and, as Kate Pool from the Society of Authors says: “The increasing dominance of Amazon (as retailer, increasingly as publisher, as owner of the Kindle, etc) is potentially very worrying.”
This, combined with the emergence of digital technology, creates enormous uncertainty. It’s a fact that the transition to digital devices will mean greater efficiencies and more focus on cost and, overall, a rather less generous publishing industry than before; a rather colder-hearted, fiercer one. The old world is fading, the new world isn’t yet in focus. When newspapers and music faced this moment, there was a significant tendency to become hugely angry that the old world in which we were all so comfortable was being “swept away”. It’s almost impossible for someone who has spent decades working in a calm, creative environment not to be enraged by the sight of American technology companies tipping everything on its head.
But let’s not overdo things. Let’s not lose sight of the data we have, and let’s not invent data when we only have anecdotes. And finally, let’s not forget the wonders this new world opens up. Being able to download a book to read instantaneously wherever you are is a thing of wonder, after all (and there is some anecdotal suggestion that people are coming back to books via new digital platforms).
For authors, the chance to reach out to readers, instantly and effectively, is changing the way titles are marketed and delivers a glorious independence that comes with having your own digital presence to curate and to shape. There are new creative opportunities offered by interactive technologies. There is the chance to play in a world where books and stories can be either the private, cherished experience of old or a public, shared conversation with other readers from across the world.
So yes, the party’s still on. It’s not quite the same party, the drink’s a good deal cheaper and we’ve got crisps, not caviar. But there are more people invited, and some of them look pretty groovy. I’ll not get my coat just yet.
Lloyd Shepherd’s debut novel, The English Monster, is published in March 2012. He hopes there will still be people around to buy it.
09/05/2011 From the blog Book to Book comes this Oscar Wilde tribute on Library Thing. Pretty cool, hm?
Oscar Wilde’s library reconstructed on LibraryThing
This LibraryThing catalogue is a tribute to Oscar Wilde and Thomas Wright’s Oscar’s Books: A journey around the library of Oscar Wilde (known in the U.S. as Built of Books: How reading defined the life of Oscar Wilde). I have attempted to list most of the books mentioned in this literary biography; all the credit for the research must go to Thomas Wright and the scholars who came before him.
Oscar Wilde (1854-1900) is best known as a playwright and wit. Wilde was also a poet, classical scholar, essayist, novelist, critic, book reviewer, short story writer, journalist, gay man, husband and father. To European contemporaries who knew him mostly through his essays, Wilde was a philosopher and a leading figure of the Symbolist and Decadent movements. To his English contemporaries, Wilde was a dandy, bon vivant and social climber, a subversive Irishman and the butt of many jokes about Aestheticism. The English later came to think of Wilde as a degenerate criminal to be reviled, or as a case study in the dangers of Art. To me, he is a source of inspiration and a hero.
In Oscar’s Books, Wright details what books were in Wilde’s collection and what role they played in his personal and artistic development. Wright’s work to reconstruct Wilde’s library and track down volumes that he owned is poignant because the books were auctioned off — along with all the Wilde family’s possessions — when Wilde was arrested.
I hope that this LibraryThing catalogue will inspire you to read Thomas Wright’s book, as well as books from Wilde’s library. To understand the significance of the books in Oscar Wilde’s collection and the context in which he read them, I recommend:
- Oscar’s Books (hardback edition: Chatto & Windus 2008, ISBN 9780701180614; paperback edition: Vintage 2009, ISBN 9780099502722). Read reviews of Oscar’s Books by The Guardian, The Independent and Literary Review.
- Son of Oscar Wilde by Vyvyan Holland
- Oscar Wilde by Richard Ellmann
- Oscar Wilde by John Sloan (Oxford World’s Classics Authors in Context series)
- The Picture of Dorian Gray: A Norton Critical Edition edited by Michael Patrick Gillespie
- The Picture of Dorian Gray: An Annotated, Uncensored Edition edited by Nicholas Frankel.
8/31/2011 From TechCrunch come the following observations from Paul Carr.
The Golden Era Of Books Isn’t Over. The Golden Era Of Books Is Now
“The golden era of books is over.” So begins Jeff Bercovici’s post on Fortune.com before he — somewhat self-contrapuntally — goes on to list the top earning authors of the previous year, including James Patterson ($84 million), Danielle Steel ($35 million) and Stephen King ($28 million).
In fact, compelling as Bercovici’s woe-is-books lede is, the stat he uses to back it up – that sales of adult hardcover books are down 23% – is somewhat, well, silly. For reasons I’ve explained before, measuring the state of “books” based on the number of hardcover sales is like measuring the popularity of “music” based on how many people are buying cassettes.
Once upon a time, hardcover books were the only way that book lovers could read new titles. This allowed publishers to charge a premium for a product — a big, shiny hardback book — that actually isn’t much more expensive to produce than a paperback. Today, most publishers release the ebook edition of a new title at the same time as a hardback. Ebooks are a cheaper, more portable, quicker way for fans to get hold of their favourite author’s latest work so it’s absolutely unremarkable that hardcore book buyers are migrating to that format. Sure enough, hardback sales have dipped in the past 12 months but, in the same period, ebook sales have soared. In terms of both unit sales (up 4.1% from 2008) and revenue (up 5.6% from 2008), American publishers experienced a bumper year last year.
And the good news doesn’t stop there: thanks to the Kindle and the iPad, people who three years ago would never have strayed within 500 feet of a bookshop (and still wouldn’t) can now buy the latest James Patterson as easily as downloading Angry Birds. People who weren’t reading for pleasure, now are. This is good.
Even more interestingly, Amazon has extracted from amber the DNA of pamphlets and short stories (and maybe even serial novels) and given them a chance at new life in the form of Kindle Singles. A whole series of startup publishers — most notably Byliner, whose debut title Three Cups of Deceit made headlines in April — have launched to feed the reading public’s hunger for essays and long-form journalism in ebook form. Twelve months ago, long-form journalism was being kept alive on ventilators — today, it’s thriving on the Kindle Single bestsellers list. Hell, Ars Technica made $15,000 in a single day after publishing their review of some Apple thing or other as a Single.
So, yes, given that the publishing industry is thriving, new formats are emerging, dead formats are coming back from the grave and top flight authors are making tens of millions of dollars a year, it’s something of a stretch to argue that the golden era of books is over. Moreover, it’s considerably less of a stretch to argue that the golden era of books is now.
08/17/2011 From the web comes a discussion of the ability of the print and the electronic book to exist side by side.
Since digital publishing has exploded in popularity, the dialogue over print media and its future has been an intense one. For centuries the printed page has been heralded as the keeper and communicator of knowledge: an incredibly efficient means of disseminating information quickly and relatively cheaply. But then the digital age came along, and now there is a new contender in the battle. For a while we’ve seen this debate between print and digital focus on how print can deliver content differently and better than its digital counterpart. Instead of going down that path, we’d like to discuss a rarely raised argument for print, from a designer’s perspective.
To begin, we think the winner for content delivery will always be digital. Print has too many restrictions such as cost, format, physical characteristics, and permanence (can’t edit once published) to contend with the vastly more nimble digital method. In digital publishing you can release limitless copies very cheaply, the format is adaptable, engaging, and non-linear, it is often lighter in weight than print, and it can be easily edited and updated with current information.
So if digital is largely more efficient and flexible in every way, the purpose for print must be found elsewhere. We recently happened upon a perfect example while researching something completely different. Above is the New Yorker cover from September 24, 2001, the first issue released after the 9/11 attacks. It was designed by illustrator Art Spiegelman and featured a two-tone black-on-black illustration of the twin towers, a humble and stark portrayal of the events that had occurred but a few days before. The illustration was inspired by the painter Ad Reinhardt, whose black-on-black canvases feature nearly imperceptible variations in hue; he added blues, greens, and reds to his blacks and juxtaposed them side by side, testing the optical limits of our perception. His paintings are notoriously impossible to reproduce digitally: any image of his work you see online is either manipulated to exaggerate the differences between the hues, or so flat that the work appears as a single black field. Similarly, this New Yorker cover is lost in digital reproduction, because the true printed matter reveals itself only subtly as you handle the printed piece and adjust it in the light. So, to begin with: digital imagery, whatever its own beautiful characteristics, will simply never match an original printed page.
A second, perhaps more striking, benefit of print that we can relate to this same cover is its quality as an object and artifact. Think back to September 12, 2001. We remember everyone scrambling to buy up every last newspaper and magazine on the shelves that depicted the events of the day before. We don’t know of anyone who went straight to the internet and started taking screenshots of The New York Times’s website. There is something about printed material that acts as a satisfying remembrance of whatever it contains. This could be seen as the reverse argument for the “permanence” of the page. Although things can easily be logged away on a hard drive, it somehow doesn’t feel as sincere a gesture when it’s cluttered with other things. Similarly, the homepage of a news organization will change day to day and sometimes hour to hour, and it is often filled with ads as well. Contrast that with the beautifully uninterrupted New Yorker cover you see above, with its single image and title: it has a feeling of reverence toward its subject. And this isn’t simply true of 9/11 coverage; the same principle can apply to anything.
A parallel can be drawn between photography and videography. When video and cinema were introduced, people feared they would destroy the art of photography: that the moving image, with its thousands upon thousands of variations, would be more appealing to audiences than a static photographic image. And yet photography remains as popular as ever. Why? Because it forces the viewer’s perspective into a single point in time and creates a single, poignant message or artifact. The same could be true of print. A newspaper or magazine or even a book depicts its contents in a finite medium; there is a clear end (a static image) one can constantly refer to as a salient memory. The video (the web), by contrast, is filled with constantly flickering images, change, and abundance. A still frame from a video, or a screenshot from the web, feels less significant than a page in a book, because with video it is the collection of the whole that makes the subject meaningful, whereas a single cover or page must stand on its own and represent a much larger theme.
Perhaps we’re being a bit too poetic as we defend our dearly beloved book or magazine. As designers, it’s easy to feel an affinity for the object and the experience of flipping through a carefully crafted piece. But we think print need not worry about how it can modernize itself to stay relevant. It isn’t about competing with digital. The task is to supplement it. By the very nature of the medium it will be in demand as a way to commemorate stories, events, or images.
What do you think? Is this a viable means for print’s survival? Or is it still doomed for failure, regardless?
8/13/2011 On the Campus Technology website, John K. Waters discusses the use of e-textbooks. So where does that leave the future of the printed textbook?
Learning Tools | Feature
E-Textbooks: 4 Keys to Going All-Digital
- By John K. Waters
When Daytona State College, a 53-year-old former community college in Florida that now offers four-year degrees as a state college, set out to implement an all-electronic book program two years ago, its goal was to drive down the cost of textbooks by 80 percent. The school is well on its way to achieving that goal, and along the way it made some discoveries about what it takes to make a successful transition to e-texts.
“We got it going in the right direction,” said Rand Spiwak, CEO of eText Consult and Daytona State’s recently retired CFO, who led the school’s e-text project. “But we had to adjust our expectations and assumptions considerably.”
Spiwak partnered with John Ittelson, professor emeritus at California State University, Monterey Bay, and director of communication, collaboration, and outreach for the California Virtual Campus, to share their experiences implementing e-textbook programs with attendees at the annual Campus Technology 2011 conference in Boston last week. They discussed strategies for evaluating the benefits and cost savings of e-texts over paper textbooks, as well as some basic information attendees would need to pursue e-text implementations at their institutions.
Before starting his own consulting practice, Spiwak spearheaded Daytona’s e-text project, which set out to replace traditional textbooks with digital alternatives, including e-textbooks and open content, for the entire school. He shared his experiences with conference attendees, along with a list of essentials for any institution considering a transition to e-texts.
“We found at the end that our initial idea was very different from where we needed to be to make this thing work,” Spiwak said. “We thought we’d have one device, deal with one publisher. Every one of those early ideas was a mistake.”
What do you need to implement a successful e-text program?
Start with cross-platform e-reader software that will run on any device, Spiwak said.
“If the way our students read their e-texts was based on where they bought the book, proprietary to a publisher or to the device, it would have been like asking them to manage five e-mail systems,” he said. “It just wouldn’t have worked. You want e-reader software that will run on any device and work with any publisher, both proprietary content and open content, and we found it best to go with a third party to provide that service. Agnostic of hardware. Very different from where we were two years ago.”
Daytona also came to the conclusion that a successful e-text program would have to embrace technology integration.
“We wanted to make sure that, whatever happened with the tech, the student wasn’t left hanging with an e-book that he or she could no longer read,” Spiwak said. “We wanted something that was open enough that, when changes in technology took place, the student could take advantage of it, or stick with what they had. We didn’t want it to be like the slide rule users going to calculators, complete replacement all at once.”
Daytona also abandoned its initial assumption that all of its 1,600 full-time and part-time faculty and 40,000 students would make the transition to e-texts simultaneously on Aug. 15, 2012. “It doesn’t work that way,” he said. “Even though we had a faculty that was very interested in making this work, we figured out that you do it like you eat an elephant: one bite at a time.”
Daytona rolled out its e-text program first with a small group of “pioneers,” faculty who actually approached the administration to volunteer. The students in those classes knew in advance that they would be 100 percent digital. That group of faculty then mentored other faculty members who wanted to make the transition. Between 10 percent and 15 percent of the Daytona State faculty came into the program per semester, strictly on a volunteer basis.
“We decided not to shove this down anyone’s throat,” Spiwak said. “When students saw their textbook costs drop, they demanded it, and faculty responded.”
Daytona also found that, by guaranteeing publishers 100 percent sell-through (that is, all students in a class would be required to pay for the text for that class upfront, something like paying a lab fee), the school had enough leverage to get the cost of the e-texts down by at least 60 percent, far below the cost of rentals or even used texts.
“The publishers liked that idea,” Spiwak said. “A lot.”
“Many of our [community] college transfer students were spending more for textbooks (new, used, and rental; any combination) than they were spending on tuition,” Spiwak concluded. “That’s pathetic, and we knew we had to solve that problem…. By bringing down the cost of textbooks, we have really opened the door to higher education for many adults who might not have come at all. With nearly the same head count, our FTE grew almost 20 percent. Because students were completing more classes, our retention rate in some developmental classes went from a miserable under-50 percent to a very positive 83 percent. Not many schools retain that many students from semester to semester, especially in college prep classes.”
About the Author
John K. Waters is a freelance journalist and author based in Palo Alto, CA.
8/11/2011 This lengthy research study from First Monday (Volume 16, Number 8, 1 August 2011) uses charts to demonstrate the steadily growing international presence of Wikipedia in scholarly publications. As librarians, how do we feel about this influence, and is there anything we can do to stem the flow of questionable information finding its way into scholarly works?
Publications in the Institute of Scientific Information’s (ISI, currently Thomson Reuters) Web of Science (WoS) and Elsevier’s Scopus databases were utilized to collect data about Wikipedia research and citations to Wikipedia. The growth of publications on Wikipedia research, the most active researchers, their associated institutions, academic fields and their geographic distribution are treated in this paper. The impact and influence of Wikipedia were identified, utilizing cited work found in WoS and Scopus. Additionally, leading authors, affiliated institutions, countries, academic fields, and publications that frequently cite Wikipedia are identified.
No one denies that Wikipedia is now a highly used, albeit controversial, information source. Wikipedia has increasingly become an important tool for “fact-checking” (Kniffel, 2008) as well as a topic of research because of its convenient access on the Web, its coverage, and the nature of its large-scale collaborative work, among other reasons. According to WorldCat (24 August 2010), Wikipedia has been the topic of more than 50 theses and dissertations worldwide and the subject of more than 200 monographic publications.
The purpose of this study is to explore the extent of Wikipedia’s presence in scholarly publications in the Web of Science (WoS) and Elsevier’s Scopus databases. The Institute for Scientific Information (ISI), publisher of WoS, asserts that it contains the world’s leading citations, with multidisciplinary coverage of over 10,000 high-impact journals in the sciences, social sciences, and arts and humanities, as well as international proceedings coverage for over 120,000 conferences. WoS comprises the Science Citation Index Expanded, indexing over 6,650 major journals; the Social Science Citation Index, containing over 1,950 journals; and the Arts and Humanities Citation Index, covering 1,160 of the world’s leading arts and humanities journals. Scopus states that it contains 18,000 titles from more than 5,000 international publishers, including 16,500 peer-reviewed journals, about 1,200 open access journals, 600 trade publications, 2,350 book series, and 3.6 million conference papers, among others. Some differences between the WoS and Scopus databases should be noted. The scope and types of publications included in WoS and Scopus differ, and this should be taken into account in understanding the search results and interpretations. It is clear that WoS covers journals more selectively, while Scopus covers a much higher number of conference papers. A recent study on the journal title overlap between the WoS and Scopus databases reported that about 45 percent of titles in Scopus are not covered in WoS, while 16 percent of titles in WoS are not covered in Scopus (Gavel and Iselid, 2008).
Wikipedia’s About page defines Wikipedia as a multilingual, Web-based, free-content encyclopedia project based on an openly editable model. Anyone can contribute to and edit Wikipedia articles; users can contribute anonymously, under a pseudonym, or with their real identity. The page history view (revision history or edit history) lists a page’s previous revisions, including date and time, the user name (or IP address), and an edit summary. However, Cohen (2009) reported that the English Wikipedia has added an imposing layer of editorial review to articles about living people, which are therefore no longer openly editable. Since its inception in 2001, Wikipedia has published 17,000,000 articles. There are currently 91,000 active contributors, and Wikipedia is now available in 270 languages. The English Wikipedia alone includes more than three million articles and 23 million pages, has accumulated more than 446 million edits, and was attracting 79 million visitors monthly as of January 2011. Wikipedia began as an offshoot of Nupedia, was founded by Jimmy Wales, and officially launched on 15 January 2001.
Among the approximately three million articles in the English Wikipedia, about 3,194 (about 0.1 percent) are featured articles. Featured articles represent the best articles which, according to Wikipedia’s featured article criteria, have undergone a thorough review process by Wikipedia’s editors to meet the highest standards for usefulness, completeness, accuracy, neutrality and style. A featured article carries a small bronze star icon in the top right corner of the article’s page. Citing a study conducted by researchers at Carnegie Mellon University and the Palo Alto Research Center, Wikipedia’s site lists the most frequently covered topics: culture and the arts (30 percent), biographies and persons (15 percent), geography and places (14 percent), society and social sciences (12 percent), and history and events (11 percent), among others. Spoerri (2007) examined the popularity of topics in Wikipedia and found the most popular Wikipedia pages were related to entertainment and sexuality; popular pages also appeared to be related to search engines, especially Google. The site reports that the growth of the English Wikipedia in terms of new articles and contributors reached a plateau in early 2007. Landgraf (2009) also reported a reduction in Wikipedia’s growth. Kopytoff (2011) reported on the celebration of Wikipedia’s tenth anniversary and mentioned plans to increase the number of foreign language articles by opening an office in India, then possibly Egypt and Brazil. Plans also include the recruitment of a wider range of contributors: more women, the elderly, and, to add more graphical content, museum experts.
Reviews in the library and information science literature indicate that Wikipedia itself has increasingly become a subject of research across diverse academic disciplines, owing to its exceptional scale and utility (Medelyan, et al., 2009). The concept of “information quality” (IQ), incorporating collaboration, evolving debates, and process as assurance, was studied using Wikipedia as an example (Stvilia, et al., 2008). The Wikipedia entry “Academic studies of Wikipedia” (http://en.wikipedia.org/wiki/Academic_studies_about_Wikipedia) offers a partial list of academic writings about Wikipedia, reported in journal articles and conference proceedings among other formats, including some Wikipedia research in peer-reviewed publications.
The question of Wikipedia’s quality and reliability as an information source has been one of the most frequently investigated research topics. In an evaluation of Wikipedia as a reference source, applying the classic reference evaluation criteria of purpose, authority, scope, audience, cost, and format, Danny P. Wallace and Connie Van Fleet (2005) concluded that Katz’s criteria for reference sources do not stand up well to Wikipedia. A comparison of Wikipedia and other encyclopedias on historical entries revealed that Wikipedia’s accuracy was 80 percent, compared with 95-96 percent accuracy in other sources (Rector, 2008). A special report by the prestigious weekly journal Nature (Giles, 2005) drew commentary from Encyclopedia Britannica. Nature’s investigation, based on 42 science entries, found that both Wikipedia and Britannica contained numerous errors, but the difference in accuracy was not great: the average inaccuracy rate in Britannica was about three errors per article, while Wikipedia contained about four. The number of edits, collaborators, and edit patterns have also been studied in relation to article quality. Wilkinson and Huberman (2007) compared the number of edits and contributors to the 1,211 “featured” articles against an equal number of other articles to test the correlation between the number of edits and article quality. They concluded that Wikipedia article quality appears to increase, on average, as the number of collaborators and the number of edits increase. Revising patterns (the total number of editors, the number of edits, and the number of major and minor edits) in a sample of two groups of articles were studied to determine their relationship to article quality (Poderi, 2009). The study reported that not every contribution had the same weight, and that major edits did not necessarily contribute to article quality. The role of main editors differed between the two groups of articles.
The articles in the group with a high presence of main editors tended to become featured articles more easily. Other aspects of quality, such as Wikipedia’s biased coverage and lack of cited sources, were identified as “Wikipedia risks” (Black, 2008). Nielsen (2007) examined about 30,368 outbound links in Wikipedia’s science entries. Although the number of linked citations to scholarly literature was small compared to the number of citations found in scientific journals, Wikipedia showed a slight tendency to cite articles in high-impact ISI journals. For example, the largest numbers of citations in the sample studied were to Nature, Science and the New England Journal of Medicine.
Coverage of twentieth-century philosophers in Wikipedia and in two other widely used online resources was compared for data regarding their birth dates, gender, and national and disciplinary backgrounds. This study found that Wikipedia contained more entries for living and ‘minor’ philosophers than traditional resources (Elvebakk, 2008). The semantic coverage of the English Wikipedia was studied and represented in terms of baseline statistics for articles, subject categories, and the top 10 authors (Holloway, et al., 2007).
Use of Wikipedia is on the rise. While some university professors have banned using Wikipedia as a research source (Cohen, 2007), use of Wikipedia has also been promoted on epistemic grounds. Fallis (2008) argued that there were good epistemic consequences of using Wikipedia as a source of information, illustrating the point with empirical examples. Epistemic values such as power, speed, immediate availability, wiki technology, the wisdom of crowds, and Wikipedia policies were noted as outweighing the deficiencies in Wikipedia’s reliability. Despite the controversies, use of Wikipedia by academic communities has been expanding, and more positive responses to Wikipedia have been reported from academic libraries. For example, libraries at the University of Washington, the University of North Texas, and Wake Forest University, among others, have decided to participate in Wikipedia by editing, adding links, or writing new articles (Lally and Dunford, 2007; Pressley and McCallum, 2008). Lim’s (2009) survey on college students’ use of Wikipedia also showed that students use it as a source for quick fact-checking and for finding background information; students’ perceptions of its information utility and their positive emotions toward Wikipedia were related to their usage level. Use of Wikipedia in college classrooms has also been reported. One of Wikipedia’s recent projects, the Public Policy Initiative (http://outreach.wikimedia.org/wiki/Public_Policy_Initiative), became a teaching resource in some universities. For example, five universities (Georgetown, George Washington, Harvard, Indiana University and Syracuse) were invited to work on editing policy-related entries in Wikipedia to improve article quality.
Citation counts in scholarly publications have frequently been used as an important tool: to assess the relative scholarly impact of research and the diffusion of new research ideas, to study journals and individual researchers, and to map scholarly communication across scientific specialties (Meho and Sugimoto, 2009). Cronin and Shaw (2007) used bibliometric tools to identify Kling’s intellectual impact and network using his publications, his cited works, and acknowledgment data. Others have studied the citing behaviors and motivations of citers beyond scientific impact (Bornmann and Daniel, 2008). For citation counts, the ISI databases (such as WoS), Scopus and Google Scholar are the most often used tools. ISI’s three citation databases were the only comprehensive citation data source until Elsevier’s Scopus and Google Scholar were launched in 2004. In a paper comparing the citation counts provided by WoS, Scopus and Google Scholar for articles from the Journal of the American Society for Information Science and Technology, Bauer and Bakkalbasi (2005) concluded that Google Scholar likely retrieves traditional journal articles that are also covered by WoS and Scopus, in addition to unique citations; however, Google Scholar’s coverage of scholarly publications was the smallest of the three.
The visibility of Wikipedia in scholarly communications was examined based on the following questions:
- How many times has Wikipedia been a topic of research in scholarly publications covered in the WoS and Scopus databases?
- Who are the contributors most often engaged in doing research about Wikipedia?
- What are these authors’ institutional affiliations?
- Which publications have published studies on Wikipedia most often?
- Which academic fields are engaged in studying Wikipedia most frequently?
- How often has Wikipedia been cited in the scholarly publications covered in the WoS and Scopus databases?
- Who cites Wikipedia most often?
- Which publications cite Wikipedia most frequently?
- Which academic fields cite Wikipedia most often?
- Authors from which institutions cite Wikipedia most frequently in their publications?
Two types of data were collected to examine the visibility of Wikipedia in scholarly publications. The presence of Wikipedia in a scholarly publication was assumed if the study’s major topics include “Wikipedia,” or if “Wikipedia” appears in its references. A search in WoS using Wikipedia in the topic OR title field was conducted in January 2011 to find the number of records whose topic is Wikipedia. A truncated search was used to match any variations and to achieve a more comprehensive result. In the same way, a search in Scopus on the title, abstract, and keyword fields was conducted. There were 291 records in WoS and 1,455 in Scopus with topics including Wikipedia. Scopus allows searching beyond its own databases by providing Web searching options; this research, however, was limited to Scopus alone, as it includes only peer-reviewed publications.
The search result displays typical citation information including author(s), title (document), source title, volume and number, pagination (if available), and publication year. From the search in WoS, all search results were selected to display the list of publications with a main topic of Wikipedia and to refine the result using ISI analysis tools. These analysis tools allow the search results to be sorted in ranked order by a selected field (e.g., author, institution name, country, etc.). For example, Brendan Luyt and Oded Nov have published most frequently on Wikipedia in scholarly publications covered by WoS. A search result in Scopus displays ranked lists for each field, for example, by source title, author name, publication year, affiliation, subject area, document type, etc. Advanced search features in Scopus were utilized for more precise and comprehensive searching. For example, a search combined with the field “affilcountry” (United States or US) displays publication output by researchers affiliated with institutions located in the United States; a search combined with “affilorg” (Hong Kong) brings up research output by researchers affiliated with institutions in Hong Kong; and an advanced search combined with “subjarea” (comp) shows the number of documents categorized as computer science.
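The field-coded searches described above can be sketched as query strings. The following minimal Python illustration assumes Scopus’s advanced-search field codes (TITLE-ABS-KEY, AFFILCOUNTRY, AFFILORG, SUBJAREA); the `scopus_query` helper is hypothetical, and the resulting strings would be pasted into the Scopus advanced search box rather than run against any API.

```python
# Sketch of the field-coded Scopus advanced searches described above.
# The field codes mirror Scopus advanced-search syntax; the helper
# function itself is hypothetical, for illustration only.

def scopus_query(*clauses: str) -> str:
    """Combine field clauses with AND, as in the Scopus advanced search box."""
    return " AND ".join(clauses)

# Topic search with the truncation wildcard (*) to match variant forms.
topic = scopus_query("TITLE-ABS-KEY(wikipedia*)")

# The same topic search restricted to US-affiliated authors.
us_output = scopus_query("TITLE-ABS-KEY(wikipedia*)",
                         'AFFILCOUNTRY("United States")')

# Restricted instead to documents categorized as computer science.
cs_output = scopus_query("TITLE-ABS-KEY(wikipedia*)", "SUBJAREA(COMP)")

print(us_output)  # TITLE-ABS-KEY(wikipedia*) AND AFFILCOUNTRY("United States")
```

Composing queries this way makes each restriction (country, organization, subject area) a separate clause, which matches how the study layered filters onto a single topic search.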
To examine aspects of Wikipedia’s impact, a search for “cited work = Wikipedia*” was conducted in WoS. A “Cited Reference Search” retrieves all references in the WoS databases that cite Wikipedia. The search result lists cited author(s), cited work (Wikipedia), year (if available), and the number of times a specific article has been cited. There were 340 records citing Wikipedia in WoS. Once the search is executed, all entries which cite Wikipedia are selected, and the search is completed. One should note that the number of citing articles on the “Cited Reference Search” page and the number listed in the “Times Cited” count on the results page may differ depending on the scope of one’s institutional subscriptions to the various databases within WoS. The “Times Cited” count on the results page draws on all the databases in WoS: Science Citation Index Expanded, Social Science Citation Index, Arts and Humanities Citation Index, Conference Proceedings Citation Index–Science, and Conference Proceedings Citation Index–Social Sciences and Humanities. For example, if an institution subscribes to the Science Citation Index Expanded, Social Science Citation Index, and Arts and Humanities Citation Index but not the Conference Proceedings Citation Index, the number of citing articles on the “Cited Reference Search” page may be smaller. The result may also be influenced by one’s subscription period: if an institution has access only to a limited span, such as 2005 to the present, the count would probably be smaller.
Larry Dossey and Brendan Luyt cited Wikipedia most often in their scholarly publications as noted in WoS. In a similar way, a search for “refsrctitle = wikipedia*” was conducted in Scopus for publications with Wikipedia as a source title in their references. There were 3,339 records citing Wikipedia as a source title in references in Scopus. All search results were downloaded into an MS Excel file for data analysis.
There were, as of January 2011, a total of 1,746 publications in WoS and Scopus for the period 2002 to 2010 which contained research about Wikipedia. The number should be taken with caution due to overlapping coverage of publications between WoS and Scopus, as noted earlier. Furthermore, these numbers may change as the coverage of publications in the WoS and Scopus databases is updated.
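The overlap caveat can be illustrated with a minimal sketch of de-duplicating records exported from two databases by normalized title. The titles below are invented for illustration; a real de-duplication would need DOIs or more careful matching.

```python
# Hypothetical sketch: counting overlap between two exported record
# sets by normalized title (illustrative titles only).

def normalize(title):
    # Lowercase and keep only alphanumerics, so trivially different
    # renderings of the same title compare equal.
    return "".join(ch for ch in title.lower() if ch.isalnum())

wos_titles = ["Open Source Intelligence", "Assessing article quality"]
scopus_titles = ["Open source intelligence.", "Wikis in the classroom"]

wos_keys = {normalize(t) for t in wos_titles}
scopus_keys = {normalize(t) for t in scopus_titles}

overlap = wos_keys & scopus_keys            # records present in both
deduplicated_total = len(wos_keys | scopus_keys)

print(len(overlap), deduplicated_total)     # 1 3
```

A raw sum of the two result sets would count the overlapping record twice, which is why the combined WoS/Scopus totals reported here should be read as upper bounds.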
To achieve a more precise measurement of research production by country, Hong Kong was searched separately and added to China’s production for data analysis. China’s production included three additional publications from Hong Kong in WoS. Likewise, for the United Kingdom, additional searches were conducted in WoS for England, Scotland, Wales, and Northern Ireland; four publications from Scotland were added to the United Kingdom’s total. In a similar manner, an additional country search for Hong Kong in Scopus added 18 more publications to China. The country name United Kingdom was used consistently in Scopus for all publications affiliated with that nation. Table 1 lists the most productive countries in Wikipedia research. The most productive countries were the U.S. and the United Kingdom in WoS, and the U.S. and Germany in Scopus; the next most productive countries in Scopus were China, France, the United Kingdom, Japan, Italy, and the Netherlands. The U.S. is far stronger in producing research on Wikipedia than any other country, accounting for about 22 percent of the publications in Scopus and about 37 percent in WoS.
Table 1: Research production on Wikipedia by country (number of publications, percent in parentheses).
WoS: United States 107 (36.8); United Kingdom 25 (8.6); Germany 22 (7.6); Canada 13 (4.4); Australia 12 (4.2); China, including Hong Kong 12 (4.2); France 11 (3.8); Italy 9 (3.1); Spain 9 (3.1); Netherlands 9 (3.1); Singapore 9 (3.1).
Scopus: United States 315 (21.6); Germany 137 (9.4); China, including Hong Kong 99 (6.8); France 69 (4.7); United Kingdom 65 (4.5); Japan 64 (4.4); Italy 57 (3.9); Netherlands 57 (3.9); Australia 55 (3.8); Spain 50 (3.4); Canada 42 (2.9).
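The percentage shares in Table 1 follow directly from the raw counts; a minimal sketch (counts copied from the table, totals of 291 WoS and 1,455 Scopus records as reported above):

```python
# Recomputing the country shares reported in Table 1 from raw counts.

def share(count, total):
    # Percentage of the total, rounded to one decimal as in the table.
    return round(100 * count / total, 1)

wos_total, scopus_total = 291, 1455

print(share(107, wos_total))     # United States in WoS -> 36.8
print(share(315, scopus_total))  # United States in Scopus -> 21.6
print(share(25, wos_total))      # United Kingdom in WoS -> 8.6
```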
Analysis of author productivity, based on the number of publications included in WoS and Scopus, indicated that a small number of authors produced a large share of the publications. There were 291 publications with a total of 701 authors in WoS, and 1,455 publications in Scopus with a total of 3,940 principal and collaborating authors, on research topics including Wikipedia. Multiple authorship was the norm; for example, one publication about Wikipedia research in Scopus was coauthored by 37 individuals.
Two individuals wrote 13 papers each, another two researchers contributed 12 publications each, and six published 10 items about Wikipedia. Altogether, 123 individuals wrote more than four publications on Wikipedia. The 15 most highly productive individuals, their affiliated institutions, countries, and the number of their publications are listed in Table 2. The most prolific individual researchers were affiliated with institutions located in European and Asian countries. Jaap Kamps at the University of Amsterdam (http://staff.science.uva.nl/~kamps/) and Gerhard Weikum of the Max–Planck–Institut für Informatik (http://www.mpi-inf.mpg.de/~weikum/) each wrote 13 articles dealing, in some fashion, with Wikipedia.
Table 2: Most highly productive authors in research on Wikipedia, based on Scopus (name, affiliation, country, number of publications).
Kamps, J., University of Amsterdam, Netherlands: 13
Weikum, G., Max–Planck–Institut für Informatik, Germany: 13
Geva, S., Queensland University of Technology, Australia: 12
Nakayama, K., Osaka University, Japan: 12
Koolen, M., University of Amsterdam, Netherlands: 10
Hara, T., Osaka University, Japan: 10
Kittur, A., Carnegie Mellon University, United States: 10
Ortega, F., Universidad Rey Juan Carlos, Spain: 10
Nishio, S., Osaka University, Japan: 10
Sun, A., Nanyang Technological University, Singapore: 10
Demartini, G., L3S Research Center, Germany: 8
Jijkoun, V., University of Amsterdam, Netherlands: 8
Milne, D., University of Waikato, New Zealand: 8
Trotman, A., University of Otago, New Zealand: 8
Witten, I.H., University of Waikato, New Zealand: 8
Affiliated institution productivity
The majority of researchers on Wikipedia were affiliated with universities. The 15 most productive institutions are listed in ranked order in Table 3. Individual researchers affiliated with the University of Amsterdam, Nanyang Technological University, and the Max–Planck–Institut für Informatik were the most productive in research on Wikipedia. These 15 institutions contributed 230 publications, about 13 percent of the total publications in WoS and Scopus. Researchers affiliated with Carnegie Mellon University and Indiana University were the most active in research on Wikipedia in the United States.
Table 3: Most highly productive institutions on Wikipedia research (number of papers).
University of Amsterdam: 31; Nanyang Technological University: 23; Max–Planck–Institut für Informatik: 19; Queensland University of Technology: 17; Carnegie Mellon University: 17; University of Tokyo: 16; Indiana University: 15; University of Illinois at Urbana–Champaign: 12; Hewlett–Packard Laboratories: 12; Osaka University: 12; Shanghai Jiao Tong University: 12; University of Washington: 11; IBM Thomas J. Watson Research Center: 11; Georgia Institute of Technology: 11; Microsoft Research: 11.
Academic fields which are most active in Wikipedia research
Table 4 describes the 10 academic fields most active in Wikipedia research according to WoS and Scopus. Academic fields in this study were defined by the databases themselves: Scopus categorizes its content into 27 subject areas, while WoS uses 251. Computer science was clearly the most productive. In the WoS databases, about 42 percent of Wikipedia research comes from the various computer science fields and about 26 percent from information and library science, while in Scopus about 72 percent of the research originates from the computer science category as Scopus defines it. The fields of mathematics, social sciences, and engineering are also highly productive. In Scopus an exceedingly small portion of publications, about one percent of Wikipedia research output, derives from the arts and humanities. Note that a publication may be categorized in more than one subject category, so the total number of publications may include duplication.
Table 4: Academic fields most active in Wikipedia research (number of publications, percent in parentheses).
WoS: Information science, Library science 74 (25.6); Computer science, Information systems 73 (25.2); Computer science, Artificial intelligence 24 (8.3); Engineering, Electrical and electronic 19 (6.6); Communications 16 (5.5); Computer science, Theory and methods 13 (4.5); Education and education research 13 (4.5); Management 13 (4.5); Computer science, Hardware and architecture 12 (4.2); Multidisciplinary sciences 10 (3.5).
Scopus: Computer science 1,052 (72.3); Mathematics 341 (23.4); Social sciences 260 (17.9); Engineering 199 (13.7); Biochemistry, Genetics and molecular biology 109 (7.5); Decision sciences 99 (6.8); Business, Management, Accounting 85 (5.8); Medicine 44 (3.0); Agriculture and Biological sciences 16 (1.1); Arts and humanities 15 (1.0); Physics and Astronomy 15 (1.0).
The leading publications reporting research on Wikipedia
Table 5 rank orders the 11 most productive publications on Wikipedia research in both WoS and Scopus. It appears that more research about Wikipedia has been published in conference papers and proceedings than in journal articles. As conference titles tend to vary, more comprehensive searches for conference publications were conducted. Series such as Lecture Notes in Computer Science (including the subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), the Proceedings of the International Conference on Information and Knowledge Management, and the International Symposium on Wikis (under slightly variant titles) are the leading outlets for Wikipedia research. Lecture Notes in Computer Science is a major series; a WorldCat search retrieves more than 100,000 items. The International Symposium on Wikis’ Web site reports that it focuses on research and practice about wikis and open collaboration, so it appears to be a very appropriate venue for Wikipedia research. First Monday is also a prominent outlet for Wikipedia research. Because of the coverage differences between WoS and Scopus, Wikipedia research is most often reported in journals in WoS and in conference proceedings in Scopus. However, it is noteworthy that the Journal of the American Society for Information Science and Technology (JASIST) appears on both lists. The top 11 publications produced about 20 percent of the Wikipedia research in WoS compared to about 37 percent in Scopus. It is interesting that Wikipedia research appears to be concentrated in a small number of publications as recorded in Scopus, while scattered among a larger number in WoS.
Table 5: Leading serials publishing Wikipedia research (number of publications).
WoS: Journal of the American Society for Information Science and Technology 14; Online Information Review 6; Journal of Computer–mediated Communication 5; Journal of Web Semantics 5; BMC Bioinformatics 4; Computers in Human Behavior 4; Electronic Library 4; Information Systems 4; Nature 4; Information Retrieval 4; New Media & Society 4.
Scopus: Lecture Notes in Computer Science (including the subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 292; Proceedings of the International Conference on Information and Knowledge Management 61; International Symposium on Wikis 53; International ACM SIGIR Conference on Research and Development in Information Retrieval 29; First Monday 26; AAAI Workshop Technical Report 14; Journal of the American Society for Information Science and Technology 14; Proceedings of the AAAI National Conference on Artificial Intelligence 13; Proceedings of the ACM Conference on Human Factors in Computing Systems 12; Proceedings of the ACM Conference on Computer Supported Cooperative Work 12; International Conference on Knowledge Discovery and Data Mining 11.
Impact of Wikipedia
Citations to Wikipedia in scholarly publications were examined to test Wikipedia’s impact on scholarly communication. This effort attempted to identify those who cite Wikipedia most often, their affiliated institutions, associated fields, and geographic distribution.
Wikipedia was cited 3,679 times in the WoS and Scopus databases. The 11 researchers who cited Wikipedia most frequently in their scholarly publications came from eight countries. Saou–Wen Su, affiliated with the Lite–On Technology Corporation in Taiwan, cited Wikipedia in eight publications; Gerhard Weikum of the Max–Planck–Institut für Informatik cited Wikipedia in seven publications. Table 6 lists the individual researchers who cited Wikipedia most frequently in their papers as recorded by Scopus and WoS.
Table 6: Citation of Wikipedia by specific researchers (name, country, number of citing publications).
Su, Saou–Wen, Taiwan: 8; Weikum, Gerhard, Germany: 7; Boukerche, Azzedine, Canada: 6; Ortega, Felipe, Spain: 6; Ren, Y., United States: 6; Ros, L., France: 5; Hijazi, H., France: 5; González–Barahona, J.M., Spain: 5; Milne, David, New Zealand: 5; Witten, Ian H., New Zealand: 5; Wong, K.L., Malaysia: 5.
Citations to Wikipedia by affiliated institutions
As illustrated below, authors affiliated with institutions in the U.S. appear to cite Wikipedia more often in their scholarly publications than authors in any other country. Researchers affiliated with Carnegie Mellon University, Georgia Institute of Technology, and Indiana University were the most active in citing Wikipedia. The most frequently citing institutions are rank ordered in Table 7. Among them are several universities in Asia: Nanyang Technological University, the University of Hong Kong, Tsinghua University, and the Chinese University of Hong Kong. Nanyang Technological University, Carnegie Mellon University, Indiana University, and Tsinghua University were also among the 15 institutions most productive in Wikipedia research.
Table 7: Institutions whose researchers cite Wikipedia most frequently (number of citations).
Carnegie Mellon University: 23; Georgia Institute of Technology: 19; Indiana University: 17; Institute of Electrical and Electronics Engineers (IEEE): 16; Nanyang Technological University: 15; University of Hong Kong: 15; Purdue University: 15; New York University: 15; Tsinghua University: 15; Chinese University of Hong Kong: 14; Arizona State University: 14; University of California, Berkeley: 14; University of California, Los Angeles: 14.
Citations to Wikipedia by country
Table 8 lists the number of citations to Wikipedia by country. Researchers from the U.S., China, the United Kingdom, Germany, and Canada most frequently cite Wikipedia according to Scopus, while the U.S., the United Kingdom, Canada, and Germany cite it most in the WoS database. For the United Kingdom’s total, an additional four citations from Scotland were added. Likewise, a combined search with “affilcountry” (Hong Kong) brought an additional 42 citations by researchers affiliated with institutions in Hong Kong in Scopus, which were added to China’s total. Scholars in the U.S., Germany, the United Kingdom, China, and France were most active in generating research on Wikipedia, while researchers affiliated in the U.S., the United Kingdom, Germany, and China cited Wikipedia most often. American scholars are strong in both Wikipedia research and citing Wikipedia in their publications. However, a closer look reveals that U.S. scholars are more likely to cite Wikipedia than to produce research on Wikipedia itself: they account for about 37 percent of published research on Wikipedia in WoS and 22 percent in Scopus, whereas they produce 43 percent of the citations to Wikipedia in WoS and 27 percent in Scopus.
Table 8: Citations to Wikipedia by country (number cited, percent in parentheses).
WoS: United States 146 (43); United Kingdom, including Scotland 19 (5.6); Canada 18 (5.3); Germany 14 (4.1); Australia 11 (3.2); Singapore 9 (2.7); China 8 (2.4); Taiwan 8 (2.4); Austria 7 (2.1); France 7 (2.1); Netherlands 7 (2.1).
Scopus: United States 908 (27); China, including Hong Kong 212 (6.3); United Kingdom 196 (5.8); Germany 158 (4.7); Canada 138 (4.1); Australia 116 (3.5); France 78 (2.3); Japan 75 (2.2); Italy 63 (1.9); Netherlands 62 (1.9); Spain 46 (1.4).
Scholarly publications citing Wikipedia most often
The publications in WoS and Scopus which most often cite Wikipedia were identified and are rank ordered in Table 9. Among the 22 publications that produced the most research about Wikipedia, four (Lecture Notes in Computer Science with its subseries, the Proceedings of the International Symposium on Wikis, First Monday, and the Journal of the American Society for Information Science and Technology) also cited Wikipedia most frequently. Interestingly, in WoS the 10 most frequently citing publications contain about 12 percent of the total citations to Wikipedia, while the 11 publications most active in producing Wikipedia research comprise about 20 percent of the publications about it. Likewise, in Scopus the 10 most highly citing publications contain only 12 percent of the relevant citations, whereas the top 11 publications on Wikipedia research contain about 37 percent of the pertinent publications. Wikipedia research is thus highly concentrated in relatively few publications, whereas citations to Wikipedia are scattered among a larger number of diverse publications in both WoS and Scopus. Wikipedia’s impact on scholarly communication therefore appears to be stronger through citations to it than through publications about it.
Table 9: Publications in WoS and Scopus which cite Wikipedia most often (number of publications, percent in parentheses).
WoS: Lecture Notes in Computer Science 11 (3.2); Journal of the American Society for Information Science and Technology 6 (1.8); Publications of the Modern Language Association of America (PMLA) 5 (1.5); Computers & Security 4 (1.2); Explore: The Journal of Science and Healing 4 (1.2); AAA – Arbeiten aus Anglistik und Amerikanistik 3 (0.9); Journal of Universal Computer Science 3 (0.9); Athletic Therapy Today 2 (0.6); Biochemistry and Molecular Biology Education 2 (0.6); Clinical Orthopedics and Related Research 2 (0.6).
Scopus: Lecture Notes in Computer Science (including the subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 202 (6); Proceedings of the International Symposium on Wikis 33 (1); Proceedings of SPIE (International Society for Optical Engineering) 30 (<1); Proceedings of the ACM International Conference Series 28 (<1); Conference Proceedings of the American Society for Engineering Education (ASEE) 26 (<1); First Monday 21 (<1); Proceedings of the International Conference on Information and Knowledge Management 20 (<1); Journal of the American Society for Information Science and Technology 15 (<1); Communications in Computer and Information Science 14 (<1); Proceedings of the ACM Conference on Human Factors in Computing Systems 13 (<1).
Academic fields citing Wikipedia most often
Table 10 displays the 12 academic fields which cite Wikipedia most often as noted in WoS and Scopus. In WoS, about 16 percent of the citations to Wikipedia originate from the computer science fields, about 10 percent from information and library science, about six percent from literature, and about four percent each from communications and engineering. In Scopus, about 42 percent of citations come from computer science, 24 percent from engineering, and another 21 percent from the social sciences. Computer science displays both the highest proportion of Wikipedia research and the highest proportion of citations to Wikipedia. The fields of engineering (24 percent) and medicine (14 percent) are quite active in citing Wikipedia in their publications; in contrast, 14 percent of Wikipedia research derives from engineering and only three percent from medicine. Mathematicians contribute a larger proportion of the Wikipedia research (23 percent) than of the citations to it (11 percent). The proportions are nearly equal for social scientists, who produce 18 percent of the Wikipedia research and 21 percent of the citations. In the arts and humanities the proportion of citations to Wikipedia (about four percent) is also greater than the proportion of research publications about Wikipedia (about one percent). Remember that a publication may be assigned to more than one subject category, so citation counts by field may include duplicates.
Table 10: Academic fields citing Wikipedia most frequently (number of citations, percent in parentheses).
WoS: Information science and Library science 34 (9.9); Computer science, Information systems 27 (7.9); Literature 19 (5.5); Computer science, Theory and methods 17 (4.9); Communications 13 (3.8); Engineering, Electrical and electronic 13 (3.8); Computer science, Software engineering 11 (3.2); Education and Education research 11 (3.2); Law 9 (2.6); Humanities, Multidisciplinary 8 (2.3); Language and Linguistics 8 (2.3); Languages 8 (2.3).
Scopus: Computer science 1,419 (42.5); Engineering 797 (23.8); Social sciences 711 (21.3); Medicine 483 (14.5); Mathematics 366 (10.9); Biochemistry, Genetics and Molecular biology 183 (5.5); Arts and Humanities 149 (4.5); Business, Management and Accounting 139 (4.2); Physics and Astronomy 139 (4.2); Materials science 109 (3.3); Decision science 102 (3.1); Nursing 83 (2.5).
Wikipedia’s increasing visibility in scholarly communications
Scholarly research about Wikipedia apparently first appeared in the 3 June 2002 issue of First Monday, in a paper entitled “Open source intelligence” by Felix Stalder and Jesse Hirsh, as well as in a 2002 article in Online entitled “Péter’s picks and pans review on Wikipedia” by Péter Jacsó. Table 11 summarizes the pertinent data about Wikipedia in WoS and Scopus from 2002 to 2010. As Table 11 illustrates, research about and citations to Wikipedia in scholarly publications have steadily increased since its launch in 2001. Although citations to Wikipedia in WoS peaked in 2007, there is substantial evidence in citation patterns to demonstrate the significant impact of Wikipedia on scholarly communication over the past decade, corresponding to its increased use as an information resource.
Table 11: Research about Wikipedia and citations to Wikipedia, by year. Each row gives: year; research publications in Scopus; citations in Scopus; research publications in WoS; citations in WoS.
2002: 2; 0; 2; 1
2003: 0; 4; 0; 0
2004: 3; 39; 0; 10
2005: 19; 97; 7; 24
2006: 80; 303; 22; 70
2007: 209; 491; 33; 81
2008: 340; 592; 65; 57
2009: 390; 880; 76; 48
2010: 412; 933; 86; 49
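The yearly counts above can be checked against the totals reported earlier; for instance, summing the Scopus research column recovers the 1,455 records found by the topic search. A small sketch:

```python
# Yearly Scopus research counts from Table 11; their sum should equal
# the 1,455 Scopus records reported in the methodology section.

scopus_research_by_year = {
    2002: 2, 2003: 0, 2004: 3, 2005: 19, 2006: 80,
    2007: 209, 2008: 340, 2009: 390, 2010: 412,
}

total = sum(scopus_research_by_year.values())
print(total)  # 1455
```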
Tables 12 and 13 present data about the types of publications that, respectively, write about and cite Wikipedia. Table 12 shows that in Scopus research about Wikipedia has been published predominantly in conference papers (63 percent), articles (26 percent), and review papers (three percent), among other formats, while in WoS it has been published more frequently in articles (65 percent), proceedings papers (nine percent), and editorial material (seven percent). However, Wikipedia tends to be more often cited in journal articles, as shown in Table 13: 30 percent in Scopus and 70 percent in WoS. Only seven percent of the citations in WoS were in conference papers, contrasted with 31 percent in Scopus. In summary, research about Wikipedia is more prominent in conference and proceedings papers, while citations to Wikipedia are more prevalent in journal articles.
Table 12: Types of publications publishing research about Wikipedia (number, percent in parentheses).
WoS: Articles 188 (65.0); Proceedings papers 25 (8.7); Editorial material 21 (7.2); Book reviews 15 (5.2); Letters 11 (3.8); News items 10 (3.4); Other 21 (7.2).
Scopus: Conference papers 921 (63.2); Articles 385 (26.4); Reviews 47 (3.2); Conference reviews 46 (3.1); Other 56 (3.8).
Table 13: Types of publications citing Wikipedia (number, percent in parentheses).
WoS: Articles 239 (70.3); Editorial material 39 (11.5); Proceedings papers 24 (7.1); Reviews 19 (5.5); Book reviews 12 (3.5); Letters 6 (1.7).
Scopus: Conference papers 1,046 (31.3); Articles 995 (29.8); Reviews 258 (7.7); Editorial material 54 (1.6); Short surveys 21 (<1); Notes 17 (<1).
Since Wikipedia was launched in 2001, the number of research publications about Wikipedia and citations to Wikipedia has increased steadily. There were a total of 1,746 publications included in WoS and Scopus for the years 2002 to 2010.
Research about Wikipedia has been published most frequently by individual researchers affiliated with academic institutions in the Netherlands, Germany, Australia, and Japan. However, the largest proportion of research on Wikipedia has been contributed by scholars in academic institutions in the U.S. (about 37 percent in WoS and 22 percent in Scopus), followed by scholars from Germany, the United Kingdom, and China. Researchers in universities are the major contributors to Wikipedia research. The University of Amsterdam in the Netherlands and the Max–Planck–Institut für Informatik in Germany were the most active in producing research on Wikipedia. Analysis by discipline shows that the most frequent contributors to Wikipedia research are computer scientists, information scientists, and mathematicians. For example, Lecture Notes in Computer Science (with its subseries) and the Proceedings of the International Symposium on Wikis (under variant titles) have published more Wikipedia research than any other publications. Conference publications and journal articles are the major venues for reporting research on Wikipedia.
Wikipedia’s citation rates in scholarly publications have been consistently increasing. It was cited 3,679 times in the WoS and Scopus databases during the last nine years. Academic institutions are not only the major producers of Wikipedia research but also the major consumers that cite Wikipedia most often. The rate of citing was highest among scholars from the U.S., the United Kingdom, Germany, and China. Wikipedia has been cited in more than 30 countries and by 306 institutions worldwide in WoS alone. Authors affiliated with academic institutions in the U.S. appear to cite Wikipedia most frequently, and American scholars tended to cite Wikipedia to a greater extent than they published research about it. Researchers affiliated with Carnegie Mellon University, Georgia Institute of Technology, and Indiana University were the most active in citing Wikipedia in their publications. Scholars in the fields of computer science, information science, and the social sciences are the most active in citing Wikipedia. Interestingly, researchers in engineering and medicine cite Wikipedia more often than they conduct research on it, while researchers in mathematics more often write about Wikipedia than cite it. Researchers in the arts and humanities likewise cite Wikipedia more than they conduct research about it. Wikipedia research is most likely to be published in conference and proceedings papers, then in journal articles, along with other formats. Citations to Wikipedia, however, were more often found in journal articles, followed by conference papers and then editorial material. A few publications contain a high portion of Wikipedia research, while citations were scattered across a wider range of publications. The breadth of Wikipedia’s impact has stretched to authors in many fields and professional areas.
Reported numbers regarding the writing about and citing of Wikipedia should be interpreted with caution, as they reflect only a snapshot provided by several databases. Since this research is based only on WoS and Scopus, the publications included are mostly in English. Finally, book reviews, editorial material, letters, and news items (which constitute a significant portion of the publications about Wikipedia in WoS) are not, strictly speaking, “research,” but they are nevertheless indicative of Wikipedia’s impact on scholarly communication.
This research adds to our understanding of Wikipedia’s role in scholarship and reflects scholarly regard, in some sense, for a highly controversial yet well-used resource on the Internet. This bibliometric study demonstrates Wikipedia’s visibility in the scholarly communication process: the productivity of scholars, their affiliated institutions and academic fields, the geographic distribution of those institutions, and the types of publications involved. The influence of Wikipedia on the scholarly community, as indicated by citations, was identified in the course of this research. Hence this paper sheds some light on trends regarding Wikipedia’s place in formal scholarship and demonstrates its growing visibility.
Recent involvement by higher education communities in Wikipedia suggests its potential to become not only a reliable resource but also a learning and teaching tool for students. Wikipedia’s plans to recruit more women and older contributors, as well as to expand its international offices, should bring greater balance and completeness to its content. As demonstrated in this study, active research on Wikipedia and citations to Wikipedia testify to Wikipedia’s position as a rich resource. The increasing scholarly attention to Wikipedia suggests a growing acceptance of its credibility as a valid information resource.
This study is only a small step in demonstrating the visibility of Wikipedia in scholarly communication. Identifying the major topics covered in scholarly publications about Wikipedia may be addressed in future research. Other issues, such as examining gender differences, co-author networks in Wikipedia research, and motivations for citing Wikipedia, could add further detail on the utility of Wikipedia in scholarship.
About the author
Taemin Kim Park, Ph.D., is an Associate Librarian of Indiana University Libraries and Adjunct Faculty in the School of Library and Information Science at Indiana University, Bloomington.
This research was conducted during the author’s research leave which was partially supported by Indiana University Libraries.
8/8/2011 I think that we all have to agree that books, journals and other printed matter have been changed forever. Borders and many other bookshops have declared bankruptcy, and it’s not due solely to online purchasing but rather to the creation of the e-book, now available in many formats. Will “the book” as we know it continue to exist? Only time will tell. Personally, I think that printed matter still has a place in our society. Although I love my Android device and it is very convenient, there is still nothing like holding a brand new book, smelling its pages and being able to turn them physically. It is so much easier to go back in a printed book to check a detail you have forgotten or missed, or how a character is related to others. In a website called
there is an article interviewing Bob Stein titled
“The Social Context of Reading: Five Questions for Bob Stein” representing The Institute for the Future of the Book
by Buzz Poole
I first learned about The Institute for the Future of the Book while working on a magazine assignment that eventually became this piece for The Millions. In getting to know Bob Stein, his colleagues and the projects they championed I became convinced that concerns about the death of reading and writing were deeply misplaced. What readers, writers, publishers and retailers really needed to worry about, and catch up with, was the increasing potential of what a book’s content could be, the delivery of the content and how we could interact with the content. Of course, plenty has changed in the intervening years and the Institute continues to instigate the exploration of ideas regarding the future of the book. I caught up with Stein over the phone for his take on today’s culture of reading.
The first time we spoke was back in the pre-Kindle, pre-iPad days of 2006. For decades, you’ve actively been thinking about and working to augment the future of the book. What is your read of how the concept of the book has changed in the past five years?
There is the question of how it has evolved in my mind and how it has evolved in the minds of the public. I view the book as a place where readers congregate, and the social aspect of reading is where we’re going. The publishing industry is trying very hard to keep the traditional model of a book intact, selling 300 or 400 pages to one reader at a time.
There is this social aspect: books are becoming places to congregate, and the form of expression is undergoing changes. In most cases e-readers and e-book developers haven’t caught up to this. There are concepts that are too far afield, like people trying to write a novel collaboratively in World of Warcraft. I have no problem with such a book being considered fiction just like Tolkien, but the execution isn’t there. And then there is something like Push Pop Press. Yes, the Al Gore book has interactive media, but it is just for one reader at a time. They are simply books with audio and video on the page. We figured that out long, long ago. And it isn’t sustainable. When you’re doing something for the first time you can beg, borrow, and steal all sorts of help when it comes to all this content. But when you go back to do it again and again you have to pay up.
My big problem with these apps is that they are like CD-ROMs, in the wrong sense – both are islands unto themselves. I’m reading The Waste Land all by myself. The apps are all walled in, all you have is the app. Heaven forbid you have an idea and want to go down the rabbit hole. You can, but you have to leave the app. I believe we are heading toward browser-based materials.
In light of how quickly e-readers have evolved do you foresee a time when printed books truly are a thing of the past? Will there be a time when e-readers will be able to compete with the most lavishly produced art book?
Yes and no. The reality is that we’re always going to have books, but they are going to play a different role in culture. There will be collectible, expensive art books and books as objects. Rich people will be able to have expensive art objects, but in terms of how most information will be moved around, it will be electronic. Books will be beautiful objects, the same as when I’m in an antique store and buy a salt shaker – I buy the object, a unity of form.
When you talk about the future of the book, you are really talking about the nature of how content is generated and engaged, right? Is one of the greatest potentials for the future of the book that this fluid, democratized notion of a book’s content will make for more transparency, especially in the ivory towers of the academy?
Let’s look at it differently. Think of going to history class as a kid, fifty years ago, fifteen years ago, it doesn’t matter. The teacher gave you a book and the first impression you were given is, Here is truth. But we’ve developed a much more sophisticated understanding of truth – it is something each one of us constructs from various perspectives. In the future we won’t be as interested in one person’s synthesis. Transparency is part of that but it is about coming at problems from different perspectives. My biggest thing moving forward is how we exploit this potential.
What is your ideal, your utopia, for the future of the book?
I’ve become interested in how context informs the reading experience, whereas a few years ago I was more focused on content. I’m interested in how context comes from different places, how it is shaped by different factors. During The Golden Notebook Project [a late 2008 “experiment in close-reading” that featured an ongoing conversation between seven readers that took place in the margins of the novel] I learned a huge amount just watching them read and debate the text. You can bring in various glosses on a document. It is a richer experience with these different framing devices readily available, being able to see multiple perspectives and points of view at once. In the digital era, context is what matters.
What was the last codex book you read? The last e-book?
The last codex book was Edmund Morris’s biography of Beethoven. The last e-books were A New Culture of Learning by John Seely Brown and Doug Thomas and Gary Shteyngart’s Super Sad True Love Story.