The “Fair Copyright in Research Works” Act rears its ugly head again

I was disappointed to learn yesterday that Congressman John Conyers (D-MI) reintroduced the “Fair Copyright in Research Works” Act despite the fact that it is neither fair nor supportive of research. As Paul Courant put it in his blog post about it the first time around, “the Fair Copyright in Research Works Act is a lot of things, but fair ain’t one of them.”

The bill is a direct response to the NIH Public Access Policy; it would prohibit any policy requiring a copyright transfer or license from federal grantees, making the current NIH policy illegal. Publishers are afraid that mandated public access to federally funded research would hurt their profit margins, and this bill is basically a gift from Conyers to Springer, Elsevier, and the AAP. Meanwhile, it contravenes everything President Obama has said about increasing openness in government, not to mention improving access to information, strengthening our education system, and “restor[ing] science to its rightful place and wield[ing] technology’s wonders to raise health care’s quality and lower its cost.” American citizens pay a lot of money for research; this bill would ensure that the vast majority of us will never see the results of that research.

This is not nearly as big or headline-worthy as the colossal banking bailout, but the spirit is the same: Use taxpayer money to save a private industry from its own failings. The big STM publishers are clinging to a dying business model, and nothing Congress does will save them if they don’t get with the program and stop fearing the giant copy machine that is the Internet. Blah blah, we know this already.

Well, the bill failed once. Here’s hoping it fails again.

RIAA will stop suing fans. A nation wonders what took them so long.

Good news for a snowbound morning: The RIAA has announced that it will not file any more lawsuits against alleged music pirates.

From the Wall Street Journal:

The decision represents an abrupt shift of strategy for the industry, which has opened legal proceedings against about 35,000 people since 2003. Critics say the legal offensive ultimately did little to stem the tide of illegally downloaded music. And it created a public-relations disaster for the industry, whose lawsuits targeted, among others, several single mothers, a dead person and a 13-year-old girl.

Instead, the Recording Industry Association of America said it plans to try an approach that relies on the cooperation of Internet-service providers. The trade group said it has hashed out preliminary agreements with major ISPs under which it will send an email to the provider when it finds a provider’s customers making music available online for others to take.

The RIAA plans to finish pursuing all currently active lawsuits – so Jammie Thomas isn’t safe yet – but when the last one is over, we will finally be able to close the book on one of the most absurd chapters in American copyright history.

I have to say, while the end of these lawsuits is wonderful news, especially for my employer and other universities and colleges across the country, a part of me is a little sad. The story of the crazily misguided RIAA suing its customers (and dead people) has become one of the mainstays of my copyright lectures. With the RIAA finally approaching sanity, I lose one of my favorite villains.

Hat tip to Fred Benenson via Twitter.

Update 12/21/08, 8:14 am EST: It turns out that the WSJ article was misleading about the RIAA’s phasing out of lawsuits. It claims the organization stopped filing lawsuits “early this fall”, but Ray Beckerman has uncovered suits filed as recently as last Monday. It’s unclear whether the WSJ was misleading on purpose or simply misled by RIAA spokespeople. So never mind, for now. Our villain remains villainous.

On “Becoming Screen Literate” by Kevin Kelly

In last week’s Screens Issue of the New York Times Magazine, Kevin Kelly had a long article called “Becoming Screen Literate.” I first became aware of Kevin Kelly and his greatness when I read another of his NYT Magazine articles in May 2006, “Scan This Book,” about the impact of mass digitization on the future of the book, and I’ve been following his work ever since.

I loved “Scan This Book” because of its optimistic and utopian vision for the future of books in a world of networked bits. Not only did Kelly write favorably of libraries’ participation in Google’s scanning project (for which I am totally in the tank), but he imagined a highly appealing textual landscape in which everything is flexible, linkable, and infinitely copyable. Most compellingly, the article tied Kelly’s fantastic potential future to the legal and economic challenges of the present; he called the indefinite extension of copyright terms “perverse” and titled a section “When Business Models Collide.” The article was both dreamy and grounded. Seriously, you should read it.

When I saw Kelly’s latest article, I hoped that it would do for screens what the previous one had done for pages: frame the astounding potential for a technology in the limitations of the present. I did not quite get what I was looking for. The article is great, and worth reading, but it is a more purely futuristic work.

The basic premise is that new tools will soon make it possible for many people to develop a “screen literacy” that maps very closely to textual literacy, where textual literacy is the ability of a user “to cut and paste ideas, annotate them with her own thoughts, link them to related ideas, search through vast libraries of work, browse subjects quickly, resequence texts, refind material, quote experts and sample bits of beloved artists.”

Kelly argues that “Literacy… required a long list of innovations and techniques that permit ordinary readers and writers to manipulate text in ways that make it useful,” and now new innovations and techniques will permit ordinary people to do the same with moving images.

If text literacy meant being able to parse and manipulate texts, then the new screen fluency means being able to parse and manipulate moving images with the same ease… It took several hundred years for the consumer tools of text literacy to crystallize after the invention of printing, but the first visual-literacy tools are already emerging in research labs and on the margins of digital culture.

These tools will take the remix and the mashup to a whole new level, to the point where we can index, reference, and annotate moving images without resorting to screen shots. Neat, right? But then Kelly loses me. He goes on:

The holy grail of visuality is to search the library of all movies the way Google can search the Web. Everyone is waiting for a tool that would allow them to type key terms, say “bicycle + dog,” which would retrieve scenes in any film featuring a dog and a bicycle.

At which point I become irretrievably distracted from the coming screen literacy, and focus on something else entirely. Why? Because there are approximately 3,500 words in this article, and none of them are “copyright,” “intellectual property,” or “lawsuit.” It’s lovely to talk about the technical challenge of building a visual search engine for a universal library of moving images, but how can you not mention that whoever builds it is likely to face exactly the same kind of legal challenges that Google’s book scanning project did? Worse, really, because book publishers are newcomers to the “sue your fans” business model, while the movie industry has been fiercely litigious since birth.

I understand that maybe Kelly didn’t want to retread old ground, and that harping on problems with copyright law gets boring after a while, but it’s hard for me to believe in a vision of a screen-fluent future that doesn’t take into account the battles we’ll have to fight to get there. The vast majority of our visual culture is copyrighted; unless and until we figure out how to fix copyright law, the vibrant creative world Kelly describes will always be in danger of death-by-takedown notice.

UM receives grant for Copyright Review Management System

This is local news for me, but exciting and important on a national level (at least I like to think so).

The University of Michigan Library was just awarded a grant of over half a million dollars from the Institute of Museum and Library Services to develop a copyright review management system that will improve the reliability of copyright status determinations.

Here are the details:

The University of Michigan Library will create a Copyright Review Management System (CRMS) to increase the reliability of copyright status determinations of books published in the United States from 1923 to 1963, and to help create a point of collaboration for other institutions. The system will aid in the process of making vast numbers of these books available online to the general public. Nearly half a million books were published in the United States between 1923 and 1963, and although many of these are likely to be in the public domain, individuals must manually check their copyright status. If a work is not in the public domain, it cannot be made accessible online. The CRMS will allow users to verify if the copyright status has been determined.

The project was inspired by the work that the Michigan Library is already doing to determine the copyright status of the thousands of books published between 1923 and 1963 that Google has digitized from our collections. Books published during that period are in the public domain if their copyrights were not renewed or if proper copyright notice was not included in the publication. Most digitization projects, including Google’s, block access to all books published after 1922 because their copyright status is unknown and difficult to determine. Michigan has a workflow in place that uses copyright renewal records and page images from the books to research the copyright status of those works, and to open up access to the ones that turn out to be in the public domain.
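To make those decision rules concrete, here is a minimal sketch of the determination logic in Python. This is purely my illustration, not the Library’s actual workflow code; the inputs stand in for the evidence a human reviewer gathers from the page images and the Copyright Office renewal records.

```python
from enum import Enum

class Status(Enum):
    PUBLIC_DOMAIN = "public domain"
    IN_COPYRIGHT = "in copyright"
    UNDETERMINED = "undetermined"

def us_book_status(pub_year, has_proper_notice=None, renewal_found=None):
    """Sketch of the 1923-1963 rules described above (hypothetical helper).

    has_proper_notice: did a reviewer find a valid copyright notice in the
        page images? (None = not yet checked)
    renewal_found: did a search of the Copyright Office renewal records
        turn up a renewal? (None = not yet searched)
    """
    if pub_year < 1923:
        return Status.PUBLIC_DOMAIN      # term expired (as of this writing)
    if 1923 <= pub_year <= 1963:
        if has_proper_notice is False:
            return Status.PUBLIC_DOMAIN  # published without the required notice
        if renewal_found is False:
            return Status.PUBLIC_DOMAIN  # copyright was never renewed
        if has_proper_notice and renewal_found:
            return Status.IN_COPYRIGHT   # notice present and renewal on file
    return Status.UNDETERMINED           # post-1963, or evidence incomplete

# For example, a 1950 book whose renewal search came up empty:
print(us_book_status(1950, has_proper_notice=True, renewal_found=False))
# -> Status.PUBLIC_DOMAIN
```

Most of the real work, of course, is the workflow around rules like these: queuing titles for review, recording who checked what, and verifying determinations before a book is opened to the public.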

The Copyright Review Management System will build on this work, and support efficient collaboration among institutions. It joins OCLC’s new Copyright Evidence Registry in the growing field of collaborative library copyright determination projects. My understanding is that Michigan is already sharing data with OCLC, and presumably our collaborators will as well. It’s nice when collaborative projects collaborate with each other.

Michigan’s project will raise the impact and usefulness of mass digitization projects by drastically increasing the number of digitized works that libraries can safely share with the public. In the absence of a reasonable orphan works bill, or even, dare I say it, some much-needed improvements in copyright law, it’s great to see libraries working to expand the known public domain and squeeze every last usable work from their massively digitized stacks.

OCLC’s new Copyright Evidence Registry

OCLC has launched the WorldCat Copyright Evidence Registry.

From the press release:

The WorldCat Copyright Evidence Registry is a community working together to build a union catalog of copyright evidence based on WorldCat, which contains more than 100 million bibliographic records describing items held in thousands of libraries worldwide. In addition to the WorldCat metadata, the Copyright Evidence Registry uses other data contributed by libraries and other organizations…

“Having a practical registry of copyright evidence is vital to our objective of providing our scholars and students with more digital content, one goal of Stanford’s mass digitization projects,” said Catherine Tierney, Associate University Librarian for Technical Services, Stanford University. “By leveraging the value of its massive database, OCLC is in a unique position to champion cooperative efforts to collect evidence crucial to determining copyright status.”

It’s good that OCLC is creating a copyright status registry. A well-populated registry, by and for librarians, with good and useful metadata, could eventually save users real time and money. Currently, we have a handful of institutions doing major digitization projects that are separately investigating copyright status on a large scale. It’s inefficient, with lots of duplicated effort. Copyright evidence is exactly the kind of thing on which libraries can and should be collaborating, and OCLC seems like a logical organization to take the lead.
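To picture what “copyright evidence” might look like as shared data, here is a hypothetical sketch of a registry record in Python. Every field name below is my own invention for illustration; OCLC has not published a schema here, and this is not it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvidenceRecord:
    """Hypothetical shape of a shared copyright-evidence record.

    Every field is invented for illustration; this is not OCLC's schema.
    """
    oclc_number: str                     # links the record to a WorldCat bib entry
    pub_year: int
    notice_checked: bool = False         # were page images reviewed for a notice?
    notice_present: Optional[bool] = None
    renewal_searched: bool = False       # were the renewal registers searched?
    renewal_found: Optional[bool] = None
    determination: str = "undetermined"  # e.g. "public domain", "in copyright"
    contributors: List[str] = field(default_factory=list)  # who supplied the evidence
```

The point of sharing the evidence itself (what was checked, where, and by whom) rather than just a verdict is the duplicated-effort problem above: a second library can trust, audit, or extend the first library’s work instead of redoing it.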

I do have a couple of questions/concerns.

  1. OCLC claims and enforces copyrights in its bibliographic records. While it grants member libraries permission to make broad use of those records, my understanding is that the same is not true for non-members. If OCLC extends that policy to the Copyright Evidence Registry, it risks becoming just another walled garden that is useful only to a select (and paying) group of members, and less useful even to that group than it would be if it were truly open.
  2. Right now the registry is sparsely populated. It will take a critical mass of records and contributors to become a trustworthy source of copyright evidence. Where will that critical mass come from? What is OCLC doing to build it quickly? How will users know when the registry has reached it?

Via Digital Koans.