Creative Commons launches a study of NC licenses

Good news coming out of the Creative Commons offices today.

From the press release:

The nonprofit organization Creative Commons has launched a research study that will explore differences between commercial and noncommercial uses of content, as those uses are understood among various communities and in connection with a wide variety of content. Generous support for the study has been provided by the Andrew W. Mellon Foundation.

“The study has direct relevance to Creative Commons’ mission of providing free, flexible copyright licenses that are easy to understand and simple to use,” said Creative Commons CEO Joi Ito. “The NC term is a popular option for creators choosing a Creative Commons license, and that tells us the term meets a need. However, as exponentially increasing numbers of works are made available under CC licenses, we want to provide additional information for creators about the contexts in which the NC term may further or impede their intentions with respect to the works they choose to share, and we want to make sure that users clearly understand those intentions. We expect the study findings will help us do a better job of explaining the licenses and to improve them, where possible. We also hope the findings, which will be made publicly available, will contribute to better understanding of some of the complexities of digital distribution of content.”

Research is expected to be completed early in 2009. The study will investigate understanding of noncommercial use and the Creative Commons NC license term through a random sample survey of online content creators in the U.S., a poll of the global Creative Commons community, and qualitative data gathered from interviews with thought leaders and focus groups with participants from around the world who create and use a wide variety of content and media.

This is an important project for CC. As I learned firsthand when I tried to create a HOWTO for using NonCommercial-licensed works, the definition of NC is really confusing for a lot of people. There are a lot of fine distinctions to be made, and creators’ understanding of what rights they’re granting can be very different from users’ interpretations of what they’ve been given. It’s impossible to pin down exactly what the license means without reading the Legal Code, but the whole point of Creative Commons is to make it so that regular humans don’t have to bother with that stuff.

It’s a problem for all the CC licenses. I think the Share Alike requirement is actually more confusing than NonCommercial, but there are plenty of people on the cc-community list who disagree with me. There’s no question that a rigorous study of the NonCommercial license, including why creators choose it, what they hope to accomplish, and what does and doesn’t work for them, would be incredibly useful. It may even lead to a whole different approach to the Human Readable language.

Creative Commons began as an experiment, and now that it has grown into a vast international movement I think it’s worth re-evaluating some of the organization’s initial strategies. Offering the simplest possible language to describe a copyright license is an appealing and democratic ideal, but I’m not sure it’s working. In fact, I wouldn’t be surprised if this study found that the human-readable Commons deeds, which are currently just a few lines of text, need to be more like a paragraph or three if they are to capture all the nuances of the legal code. Some very important details get lost when the explanations are kept to 25 words or less: for instance, under the license, a video that uses an SA-licensed song is considered a derivative work, even though it isn’t one by the generally accepted definition.

Congratulations to Creative Commons, and good on Mellon for funding the project. I’ll be following its progress with great interest.

UM receives grant for Copyright Review Management System

This is local news for me, but exciting and important on a national level (at least I like to think so).

The University of Michigan Library was just awarded a grant of over half a million dollars from the Institute of Museum and Library Services to develop a copyright review management system that will improve the reliability of copyright status determinations.

Here are the details:

The University of Michigan Library will create a Copyright Review Management System (CRMS) to increase the reliability of copyright status determinations of books published in the United States from 1923 to 1963, and to help create a point of collaboration for other institutions. The system will aid in the process of making vast numbers of these books available online to the general public. Nearly half a million books were published in the United States between 1923 and 1963, and although many of these are likely to be in the public domain, individuals must manually check their copyright status. If a work is not in the public domain, it cannot be made accessible online. The CRMS will allow users to verify if the copyright status has been determined.

The project was inspired by the work that the Michigan Library is already doing to determine the copyright status of the thousands of books published between 1923 and 1963 that Google has digitized from our collections. Books published during that period are in the public domain if their copyrights were not renewed or if proper copyright notice was not included in the publication. Most digitization projects, including Google’s, block access to all books published after 1922 because their copyright status is unknown and difficult to determine. Michigan has a workflow in place that uses copyright renewal records and page images from the books to research the copyright status of those works, and to open up access to the ones that turn out to be in the public domain.
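At its core, the determination rule described above is simple, even if gathering the evidence is not. Here is a minimal sketch of that rule in Python, just to make the logic concrete; the field names (`pub_year`, `renewal_found`, `had_proper_notice`) are hypothetical stand-ins, not the actual CRMS data model, and real reviews of course depend on humans reading renewal records and page images:

```python
from dataclasses import dataclass

@dataclass
class Book:
    """Hypothetical record for a US book under copyright review."""
    title: str
    pub_year: int
    had_proper_notice: bool  # proper copyright notice found in the publication?
    renewal_found: bool      # renewal located in the copyright renewal records?

def likely_public_domain(book: Book) -> bool:
    """Apply the rule for US books published 1923-1963: such a book is in
    the public domain if its copyright was not renewed, or if it was
    published without proper copyright notice."""
    if not (1923 <= book.pub_year <= 1963):
        return False  # outside the window this particular rule covers
    return (not book.renewal_found) or (not book.had_proper_notice)

# Example: a 1950 book with proper notice but no renewal on record
print(likely_public_domain(Book("Example Title", 1950, True, False)))  # True
```

The hard part, and the part the CRMS is meant to support, is reliably establishing those two facts for each of the thousands of books in question, and recording the determinations so other institutions don’t have to repeat the work.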

The Copyright Review Management System will build on this work and support efficient collaboration among institutions. It joins OCLC’s new Copyright Evidence Registry in the growing field of collaborative library copyright determination projects. My understanding is that Michigan is already sharing data with OCLC, and presumably our collaborators will as well. It’s nice when collaborative projects collaborate with each other.

Michigan’s project will raise the impact and usefulness of mass digitization projects by drastically increasing the number of digitized works that libraries can safely share with the public. In the absence of a reasonable orphan works bill, or even, dare I say it, some much-needed improvements in copyright law, it’s great to see libraries working to expand the known public domain and squeeze every last usable work from their massively digitized stacks.