
Interview with Hugh McGuire, Founder of Librivox.org

Jonathan Gray - October 7, 2010 in Exemplars, External, Featured Project, Interviews, Public Domain, WG Public Domain, Working Groups

Following is an interview with Hugh McGuire, Founder of the Librivox project and member of the Open Knowledge Foundation’s Working Group on the Public Domain.



Could you tell us a bit about the project and its background? Why did you start it? When? What was the need at the time?

There were some philosophical reasons, and some practical reasons, for the creation of LibriVox, which “launched” in August 2005. On the philosophical side, I was fascinated by Richard Stallman and the free software movement, both in its methodology and its ethic. I was equally excited by Lessig’s work with the Creative Commons movement and the idea of protecting the public domain, including projects such as Michael Hart’s Project Gutenberg. Brewster Kahle’s vision at the Internet Archive of Universal Access to All Human Knowledge was another piece of the puzzle, as was Wikipedia, the most visible non-software open source project around at the time. Finally, blogging and podcasting revealed the possibility that anyone could make media and deliver it to the world. It was a potent cocktail.

On the practical side, I was going on a long drive and wanted to download some free audiobooks – there weren’t very many to be found – and it seemed to me that an open source project to make some would be an exciting application of all that stuff I’d been thinking about above.

How is the project doing now? Any numbers on contributors, files, etc? Wider coverage and exposure?

It’s clicking along. We put out about 100 books a month now. Here are our latest stats:

  • Total number of projects: 4342
  • Number of completed projects: 3768
  • Number of completed non-English projects: 551
  • Total number of languages: 32
  • Number of languages with a completed work: 29
  • Number of completed solo projects: 1716
  • Number of readers: 3975, of whom 3772 have completed something
  • Total recorded time: 78850563 seconds, or 2 years, 182 days, 3 hours, 18 minutes, and 31 seconds, across 78438 sections

What are the synergies with other projects and initiatives such as Project Gutenberg, the Wikimedia Foundation projects and the Internet Archive?

Project Gutenberg provides the bulk of the texts we work from, and they do all the legal work to make sure the texts are in the public domain. They’ve given us some financial support over the years to pay some server costs. And they also have started hosting some of our audiobooks.

Internet Archive hosts all our audio, and when we need a legal entity to represent us – for instance when we launched our first, brief funding drive this spring – IA helps out.

We’ve never had much connection with the Wikimedia Foundation, though we’ve talked with them over the years of course.

Can users request audio versions of particular texts?

Yes, but that doesn’t guarantee that anyone will want to record them.

What are your current plans for languages other than English?

To record all public domain books in all languages in the universe.

Any interesting stories about Librivox content? Coincidences, anecdotes or interesting reuses of the material?

Eegs. Well, some LibriVox cover art was used in a BlackBerry commercial. The explosion in popularity of mobile apps – iPhone/Android – built on the LibriVox catalog has been the most gratifying. And we’re starting to see new websites built on our catalog too … it’s exciting, and demonstrates the value of open APIs.

How can people help out? Are there any particular types of assistance or expertise you are currently seeking?

Mostly: reading and prooflistening.

I understand you are personally interested in open content, open data and the public domain. Do you currently have any plans for other projects in this area?

Hrm. I’m mostly focused on book publishing these days, and I’m trying to do things in the publishing industry that push towards a more open approach to content.

Can you give a sense of what you hope this area will look like in the future? E.g. in ten or twenty years’ time? Any thoughts about the future of delivering and reusing public domain content? New opportunities?

Well, one thing I would like to see is the public domain expanding again in the USA. The current approach to copyright — essentially extension after extension, so that nothing new ever goes into the public domain — is very depressing. But I think the tension between this desire to keep things locked up and the unprecedented ability to do things with books, media and data is a great debate. I have to think that in the end the value of using data and media in new ways will outweigh the desire to create false scarcity, but there’s a lot of struggle yet to make this happen, and to figure out what businesses look like in such an environment.

In short – we live in interesting times.


Opening up university infrastructure data

Jonathan Gray - July 20, 2010 in External, Guest post, Open Data, Public Domain

The following guest post is from Christopher Gutteridge, Web Projects Manager at the School of Electronics and Computer Science (ECS), University of Southampton, and member of the OKF’s Working Group on Open Bibliographic Data.

We announced on Tuesday (13th July 2010) that all the RDF made available about our school would be placed in the public domain.

Around five years ago we (the School of Electronics and Computer Science, University of Southampton) ran a project to publish our infrastructure data as open data. This included staff, teaching modules, research groups, seminars and projects. This year we have been overhauling the site based on what we’ve learned in the interim. We made plenty of mistakes, but that’s fine and what being a university is all about. We’ll continue to blog about what we’ve learned.

We have formally added a CC0 public domain license to all our infrastructure RDF data, such as staff contact details, research groups and publication lists. One reason few people took an interest in working with our data is that we didn’t explicitly say what was and wasn’t OK, and people are disinclined to build anything on top of data which they have no explicit permission to use. Most people instinctively want to preserve some rights over their data, but we can see no value in restricting what this data can be used for. Restricting commercial use is not helpful, and restricting derivative works of data is nonsensical!

Here’s an example: someone is building a website to list academics by their research area, and they use our data to add our staff to it. How does it benefit us to force them to attribute our data to us? They are already assisting us by making our staff and pages more discoverable, so why would we want to impose a restriction? If they want to build a service that compiles and republishes data, they would need to track every license, and that’s going to be a bother on a similar scale to the original BSD clause 3.

Our attitude is that we’d like attribution where convenient, but not if it’s a bother. Rather than a legal must-attribute requirement, we say “please attribute”. It’s our hope that this step will help other similar organisations take the same step with the confidence of not being the first to do so.

The CC0 license does not currently extend to our research publication documents (just the metadata) or to research data. It is my personal view that research funders should make it a requirement of funding that a project publishes all data produced, in open formats, along with any custom software used to produce or process it, including the source code and (ideally) the complete CVS/git/svn history. This is beyond the scope of what we’ve done recently in ECS, but the University is taking the management of research data very seriously and it is my hope that this will result in more openness.

Another mistake we have learned from is that we made a huge effort to model and describe our data as semantically accurately as possible. Nobody cares enough about our data to explain to their tool what an “ECS Person” is. We’re in the process of adding the more generic schemas like FOAF, SIOC, etc. The awesome thing about the RDF format is that we can do this gently and incrementally. So now every person is (via rdf:type) both an ecs:Person and a foaf:Person. The process of making this more generic will continue for a while, and we may eventually retire most of the extraneous ecs:xyz site-specific relationships except where no better ones exist.
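To make the dual-typing concrete, here is a minimal sketch in Python using the rdflib library; the namespace URI and person identifier below are made up for illustration rather than taken from the live ECS data.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    # Hypothetical namespace and identifier, for illustration only.
    ECS = Namespace("http://data.example.ac.uk/ns/")
    person = URIRef("http://data.example.ac.uk/person/123")

    g = Graph()
    g.bind("ecs", ECS)
    g.bind("foaf", FOAF)

    # Type the same resource with both the site-specific class and the
    # generic FOAF class, so that generic tools can understand it too.
    g.add((person, RDF.type, ECS.Person))
    g.add((person, RDF.type, FOAF.Person))
    g.add((person, FOAF.name, Literal("Nigel R Shadbolt")))

    print(g.serialize(format="turtle"))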

The key turning point for us was when we started trying to use this data to solve our own problems. We frequently build websites for projects and research groups, and these need views of staff, projects, publications and so on. Currently this is done with an SQL connection to the database, and we hope the postgrad running the site doesn’t make any cock-ups which result in data being made public which should not have been. We’ve never had any (major) problems with this approach, but we think that loading all our RDF data into a SPARQL server (like an SQL server, but for RDF data, and accessed over HTTP) is a better approach. The SPARQL server only contains information we are making public, so the risk of leaks (e.g. staff who’ve not given formal permission to appear on our website) is minimised. We’ve taken our first faltering steps and discovered immediately that our data sucked (well, wasn’t as useful as we’d imagined). We’d modelled it with an eye to accuracy, not usefulness, believing that if you build it, they will come. The process of “eating our own dogfood” rapidly revealed many typos and poor design decisions which had not come to light in the previous 4 or 5 years!
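As a rough illustration of the difference in access model, a public SPARQL endpoint can be queried over HTTP in a few lines of Python using the SPARQLWrapper library; the endpoint URL below is hypothetical, and the query sticks to generic FOAF terms.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Hypothetical public endpoint; it only holds data already cleared for publication.
    sparql = SPARQLWrapper("http://sparql.example.ac.uk/")
    sparql.setReturnFormat(JSON)

    # List people and their names, instead of connecting to the internal SQL database.
    sparql.setQuery("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name WHERE {
            ?person a foaf:Person ;
                    foaf:name ?name .
        } LIMIT 10
    """)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["person"]["value"], row["name"]["value"])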

Currently we’re also thinking about what the best “boilerplate” data is to put in each document. Again, we’re now thinking about how to make it useful to other people rather than how to accurately model things.

There’s no definitive guidance on this. I’m interested to hear from people who want to consume data like this: tell us what you *need* to be told, rather than what we want to tell you. Currently what we have is probably overkill!

cc:attributionName “University of Southampton, School of Electronics and Computer Science”
dc11:description “This rdf document contains information about a person in the Department of Electronics and Computer Science at the University of Southampton.”
dc11:title “Southampton ECS People: Professor Nigel R Shadbolt”
dc:created “2010-07-17T10:01:14Z”
rdfs:comment “This data is freely available, but you may attribute it if you wish. If you’re using this data, we’d love to hear about it at webmaster@ecs.soton.ac.uk.”
rdfs:label “Southampton ECS People: Professor Nigel R Shadbolt”

One field I believe should be standard, which we don’t currently have, is where to send corrections. Some of the data on data.gov.uk is out of date, and an indication of how to correct it would be nice and would benefit everyone.

At the same time we have started making our research publication metadata available as RDF, also under CC0, via our EPrints server. It helps that I’m also lead developer for the EPrints project! By default, any site upgrading to EPrints 3.2.1 or later will make linked data available automatically (albeit with an unspecified license).

Now let me tell you how open linked data can save a university time and money!

Scenario: the university cartography department provides open data in RDF form describing every building, its GPS coordinates and its ID number. (I was able to create such a file for 61 university buildings in less than an hour’s work; the information is already freely published on maps on our website, so making it available is no big deal.)

The university teaching support team maintain a database of learning spaces: the features they contain (projectors, seating layout, capacity, etc.) and what building each one is in. They use the same identifier (URI) for buildings as the cartography dept., but don’t even need to talk to them, as the scheme is very simple. Let’s say:
http://data.exampleuniversity.ac.uk/location/building/23

Each team undertakes to keep their bit up to date, which is basically work they were doing anyway. They feed their own systems from this data, so there’s only one place to maintain it. They maintain it in whatever form works for them (SQL, raw RDF, a text file, an Excel file in a shared directory!) and data.exampleuniversity.ac.uk knows how to get at this and provide it as well-formed RDF.
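A minimal sketch of what that aggregation step could look like, assuming the cartography team keeps a simple CSV of buildings; the file name, column names and geo vocabulary are assumptions for illustration, though the building URIs follow the scheme above.

    import csv
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDFS

    GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
    BASE = "http://data.exampleuniversity.ac.uk/location/building/"

    g = Graph()
    g.bind("geo", GEO)

    # buildings.csv is assumed to have the columns: id, name, lat, long
    with open("buildings.csv") as f:
        for row in csv.DictReader(f):
            building = URIRef(BASE + row["id"])   # e.g. .../building/23
            g.add((building, RDFS.label, Literal(row["name"])))
            g.add((building, GEO.lat, Literal(row["lat"])))
            g.add((building, GEO.long, Literal(row["long"])))

    # Publish as well-formed RDF/XML for data.exampleuniversity.ac.uk to serve.
    g.serialize("buildings.rdf", format="xml")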

The timetabling team wants to build a service to allow lecturers and students to search for empty rooms with certain features, near where they are now. (This is a genuine request made of our timetabling team at Southampton, and one they would like a solution for.)

The coder tasked with this gets the list of empty rooms from the timetabling team; possibly this won’t be open data, but it still uses the same room IDs (URIs), e.g. http://data.exampleuniversity.ac.uk/location/building/23/room/101

She can then mash this up with the learning-space data and the building location data to build a search showing empty rooms, filtered by required feature(s). She could even take the building you’re currently in and sort the results by distance from you. The key thing is that she doesn’t have to recreate any existing data, and as the data is open she doesn’t need to jump through any hoops to get it. She may wish to register her use so that she’s informed of any planned outages or changes to the data she’s using, but that’s about it. She has to do no additional maintenance, as the data is being sourced directly from the owners. You could do all this with SQL, but this approach allows people to use the data with confidence without having to get a bunch of senior managers to agree a business case. An academic from another university, running a conference at exampleuniversity, can use the same information without having to navigate any of the politics and bureaucracy, and improve their conference site’s value to delegates by joining each session to its accurate location. If they make the conference programme into linked data (see http://programme.ecs.soton.ac.uk/ for my work in this area!) then a third party could develop an iPhone app to mash up the programme and university building location datasets and help delegates navigate.
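Sketching that mash-up in Python (the dataset URLs and the ex: property names below are hypothetical, since the real vocabulary isn’t specified here), the join itself is a single SPARQL query over the merged open data.

    from rdflib import Graph

    # Fetch the two open datasets over HTTP and merge them into one graph.
    # Both URLs are hypothetical stand-ins for the published RDF documents.
    g = Graph()
    g.parse("http://data.exampleuniversity.ac.uk/learning-spaces.rdf")
    g.parse("http://data.exampleuniversity.ac.uk/buildings.rdf")

    # Rooms that have a projector, joined to their building's coordinates.
    results = g.query("""
        PREFIX ex:  <http://data.exampleuniversity.ac.uk/ns/>
        PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
        SELECT ?room ?lat ?long WHERE {
            ?room ex:hasFeature ex:projector ;
                  ex:inBuilding ?building .
            ?building geo:lat ?lat ;
                      geo:long ?long .
        }
    """)

    # In the real service these rows would be intersected with the
    # timetabling team's list of currently empty rooms (same room URIs)
    # and sorted by distance from the user's current building.
    for room, lat, long in results:
        print(room, lat, long)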

But the key thing is that making your information machine readable, discoverable and openly licensed is of most value to your organisation’s own members. It stops duplication of work and reduces time wasted trying to get a copy of data other staff maintain.

“If HP knew what HP knows, we’d be three times more profitable.” – Hewlett-Packard Chairman and CEO Lew Platt

I’ve been working on a mindmap to brainstorm every potential entity a university may eventually want to identify with a URI. Many of these would benefit from open data. Please contact me if you’ve got ones to add! It would be potentially useful to start recommending styles for URIs for things like rooms, courses and seminars as most of our data will be of a similar shape, and it makes things easier if we can avoid needless inconsistency!

Related posts:

  1. 8.4 Million Grant to University of Manchester to Expand Semi-Open Data Repository
  2. Opening up linguistic data at the American National Corpus
  3. Cornell University Library keeps reproductions of public domain works in the public domain

Public Domain Calculators at Europeana

Jonathan Gray - May 12, 2010 in COMMUNIA, External, Guest post, OKF, OKF Projects, Public Domain, Public Domain Works, Technical, WG Public Domain, Working Groups

The following guest post is from Christina Angelopoulos at the Institute for Information Law (IViR) and Maarten Zeinstra at Nederland Kennisland who are working on building a series of Public Domain Calculators as part of the Europeana project. Both are also members of the Open Knowledge Foundation’s Working Group on the Public Domain.


Over the past few months the Institute for Information Law (IViR) of the University of Amsterdam and Nederland Kennisland have been collaborating on the preparation of a set of six Public Domain Helper Tools as part of the EuropeanaConnect project. The Tools are intended to assist Europeana data providers in determining whether or not a certain work or other subject matter vested with copyright or neighbouring rights (related rights) has fallen into the public domain and can therefore be freely copied or re-used, by functioning as a simple interface between the user and the often complex set of national rules governing the term of protection. The issue is of significance for Europeana, as contributing organisations will be expected to clearly mark the material in their collections as being in the public domain, through the attachment of a Europeana Public Domain Licence, whenever possible.

The Tools are based on six National Flowcharts (Decision Trees) built by IViR on the basis of research into the duration of protection of subject matter in which copyright or neighbouring rights subsist, in six European jurisdictions (the Czech Republic, France, Italy, the Netherlands, Spain and the United Kingdom). By means of a series of simple yes-or-no questions, the Flowcharts are intended to guide the user through all the important issues relevant to determining the public domain status of a given item.
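To give a flavour of the kind of rule the flowcharts encode, here is a deliberately oversimplified Python sketch of the baseline EU term (70 years after the death of the author, calculated from the end of that year); the actual flowcharts cover many further cases, such as joint authorship, anonymous works and neighbouring rights, so this is not a substitute for them.

    from datetime import date

    def in_public_domain(author_death_year, today=None):
        """Very rough check against the baseline EU copyright term:
        protection lasts 70 years after the author's death, running
        from 1 January of the year following that death. The real
        flowcharts handle many further national rules and exceptions."""
        today = today or date.today()
        return today.year > author_death_year + 70

    # Hypothetical example: a sole author who died in 1935.
    print(in_public_domain(1935))  # True from 2006 onwards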

Researching Copyright Law

The first step in the construction of the flowcharts was a careful study of the EU Term Directive. The Directive attempts to harmonise the rules on the term of protection of copyright and neighbouring rights across EU Member States. The rules of the Directive were integrated by IViR into a set of Generic Skeleton European Flowcharts. Given the essential role that the Term Directive has played in shaping national laws on the duration of protection, these generic charts functioned as the prototype for the six National Flowcharts. An initial version of the Generic European Flowchart, as well as the National Flowcharts for the Netherlands and the United Kingdom, was put together with the help of the Open Knowledge Foundation at a Communia workshop in November 2009.

Further information necessary for the refinement of these charts as well as the assembly of the remaining four National Flowcharts was collected either through the collaboration of National Legal Experts contacted by IViR (Czech Republic, Italy and Spain) or independently through IViR’s in-house expertise (EU, France, the Netherlands and the UK).

Both the Generic European Flowcharts and the National Flowcharts have been split into two categories: one dedicated to the rules governing the duration of copyright and the sui generis database right, and one dedicated to the rules governing neighbouring rights. Although this division was made for the sake of usability and in accordance with the different subject matter of these categories of rights (works of copyright and unoriginal databases on the one hand and performances, phonograms, films and broadcasts on the other), the two types of flowcharts are intended to be viewed as connected and should be applied jointly if a comprehensive conclusion as to the public domain status of an examined item is to be reached (in fact the final conclusion in each directs the user to the application of the other). This is because, although the protected subject matter of these two categories of rights differs, they may not be entirely unrelated. For example, it does not suffice to examine whether the rights of the author of a musical work have expired; it may also be necessary to investigate whether the rights of the performer of the work or of the producer of the phonogram on which the work has been fixed have also expired, in order to reach an accurate conclusion as to whether or not a certain item in a collection may be copied or re-used.

Legal Complexities

A variety of legal complexities surfaced during the research into the topic. Condensing the complex rules that govern the term of protection in the examined jurisdictions into a user-friendly tool presented a substantial challenge. One of the most perplexing issues was that of the first question to be asked. Rather than engage in complicated descriptions of the scope of the subject matter protected by copyright and related rights, IViR decided to avoid this can of worms. Instead, the flowchart’s starting point is provided by the question “is the work an unoriginal database?” However, this solution seems unsatisfactory and further thought is being put into an alternative approach.

Other difficult legal issues we stumbled upon include the following:

  • Term of protection vis-à-vis third countries
  • Term of protection of works of joint authorship and collective works
  • The term of protection (or lack thereof) for moral rights
  • Application of new terms and transitional provisions
  • Copyright protection of critical and scientific publications and of non-original photographs
  • Copyright protection of official acts of public authorities and other works of public origins (e.g. legislative texts, political speeches, works of traditional folklore)
  • Copyright protection of translations, adaptations and typographical arrangements
  • Copyright protection of computer-generated works

On the national level, areas of uncertainty related to such matters as the British provisions on the protection of films (no distinction is made under British law between the audiovisual or cinematographic work and its first fixation, contrary to the system applied on the EU level) or exceptional extensions to the term of protection, such as that granted in France due to World Wars I and II or in the UK to J.M. Barrie’s “Peter Pan”.

Web-based Public Domain Calculators

Once the Flowcharts had been prepared they were translated into code by IViR’s colleagues at Kennisland, thus resulting in the creation of the current set of six web-based Public Domain Helper Tools.

Technically, the flowcharts needed to be translated into formats that computers can read. For this project Kennisland chose an Extensible Markup Language (XML) approach for describing the questions in the flowcharts and the relations between them. The resulting XML documents are both human and computer readable. Using XML documents also allowed Kennisland to keep the decision structure separate from the actual programming language, which makes maintenance of both content and code easier.

Kennisland then needed to build an XML reader that could translate the structures and questions of these XML files into a questionnaire, or apply a set of data to the available questions so as to make the automatic calculation of large datasets possible. For the EuropeanaConnect project Kennisland developed two of these XML readers. The first translates the XML schemes into a graphical user interface tool (this can be found at EuropeanaLabs), and the second, which resides in the Public Domain Works project’s Mercurial repository on KnowledgeForge, can automatically determine the status of a work. Both of these applications are open source and we encourage people to download, modify and work on these tools.
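The actual XML schema and reader code are not reproduced here, but the general idea can be sketched in Python as follows: a made-up two-question flowchart in which each answer leads either to the next question or to a verdict, and a small reader that walks it interactively. The element and attribute names are invented for illustration and do not reflect the real EuropeanaConnect schema.

    import xml.etree.ElementTree as ET

    # A made-up flowchart fragment in the spirit of the approach described above.
    FLOWCHART_XML = """
    <flowchart start="q1">
      <question id="q1" text="Is the work an unoriginal database?">
        <yes verdict="apply the sui generis database right rules"/>
        <no next="q2"/>
      </question>
      <question id="q2" text="Did the author die more than 70 years ago?">
        <yes verdict="public domain under the baseline rule"/>
        <no verdict="still protected under the baseline rule"/>
      </question>
    </flowchart>
    """

    def run(xml_text):
        root = ET.fromstring(xml_text)
        questions = {q.get("id"): q for q in root.findall("question")}
        current = root.get("start")
        while current:
            q = questions[current]
            answer = input(q.get("text") + " [y/n] ").strip().lower()
            branch = q.find("yes" if answer.startswith("y") else "no")
            if branch.get("verdict"):
                return branch.get("verdict")
            current = branch.get("next")

    if __name__ == "__main__":
        print(run(FLOWCHART_XML))

The same walk can be driven by a dataset instead of interactive answers, which is how batch calculation over a large collection would work.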

It should be noted that, as part of Kennisland’s collaboration with the Open Knowledge Foundation, Kennisland is currently assisting in the development of an XML-based scheme for the automatic determination of the rights status of a work using bibliographic information. Unfortunately, however, this information alone is usually not enough for automatic determination at the European level. This is due to the many international treaties that have accumulated over the years; the rules change depending, for example, on whether an author was born in a country party to the Berne Convention, an EU Member State or a third country.

It should of course also be noted that there is a limit to the extent to which an electronic tool can replace a case-by-case assessment of the public domain status of a copyrighted work or other protected subject matter in complicated legal situations. The Tools are accordingly accompanied by a disclaimer indicating that they cannot offer an absolute guarantee of legal certainty.

Further fine-tuning is necessary before the Helper Tools are ready to be deployed. For the moment test versions of the electronic Tools can be found here. We invite readers to try these beta tools and give us feedback on the pd-discuss list!

Note from the authors: if the whole construction process for the Flowcharts has highlighted one thing, it would be the bewildering complexity of the current rules governing the term of protection for copyright and related rights. Despite the Term Directive’s attempts at creating a level playing field, national legislative idiosyncrasies are still going strong in the post-harmonisation era – a single European term of protection remains very much a chimera. The relevant rules are hardly simple at the level of the individual Member States either. In particular, in countries such as the UK and France, the term of protection currently operates under confusing entanglements of rules and exceptions that make confident calculation of the term of protection almost impossible for a copyright layperson, and difficult even for experts.


Generic copyright flowchart by Christina Angelopoulos. PDF version available from Public Domain Calculators wiki page
