The Second Wave
You might have been wondering why I’ve been banging on about XMP here. Why the emphasis on one vendor technology on a blog focussed on an industry linking solution? Well, this post is an attempt to answer that.
Four years ago we at Nature Publishing Group, along with a select few early adopters, started up our RSS news feeds. We chose RSS 1.0, which allowed us to embed a rich metadata term set drawn from multiple schemas, especially Dublin Core and PRISM. We evangelized this heavily at the time, publishing documents on XML.com (Jul. ’03) and in D-Lib Magazine (Dec. ’04), as well as speaking about it at various meetings and blogging about it. Since then many more publishers have come on board and now provide RSS routinely, many of them choosing to enrich their feeds with metadata.
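To make this concrete, here is a sketch of what one item from such an enriched feed might look like. The article title, author, DOI and page numbers are all invented for illustration, and the exact PRISM terms (and namespace version) varied from feed to feed, but the essential pattern, mixing vocabularies from several schemas by namespace within RDF/XML, is what mattered:

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns="http://purl.org/rss/1.0/"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/">
      <!-- channel element and item listing omitted for brevity -->
      <item rdf:about="http://dx.doi.org/10.9999/example.123">
        <title>An Example Article Title</title>
        <link>http://dx.doi.org/10.9999/example.123</link>
        <dc:creator>A. N. Author</dc:creator>
        <dc:date>2003-07-01</dc:date>
        <dc:identifier>doi:10.9999/example.123</dc:identifier>
        <prism:publicationName>Example Journal</prism:publicationName>
        <prism:volume>42</prism:volume>
        <prism:startingPage>101</prism:startingPage>
      </item>
    </rdf:RDF>

Because this is well-formed RDF/XML, the bibliographic description rides along with the feed for free: any RDF-aware consumer can pull out the triples without knowing anything about RSS as such.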
In hindsight, RSS can be seen as the First Wave of projecting a web presence beyond the content platform using standard markup formats. With this embedded metadata a publisher can expand their web footprint and allow users to link back to their content server.
Now XMP, with its potential for embedding metadata in rich media, can be seen as a Second Wave. Media assets distributed over the network can carry along their own metadata and identity, which third-party applications can leverage to provide interesting new functionality and link-back capability. Again, a projection of web presence.
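For those who haven’t met one, an XMP packet is simply a small block of RDF/XML wrapped in xpacket processing instructions so that it can be located inside a binary file. Here is a minimal sketch; the Dublin Core values are invented, though the packet framing (the xpacket markers with their well-known packet id, and the x:xmpmeta wrapper) follows the published specification:

    <?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about=""
            xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>
            <rdf:Alt>
              <rdf:li xml:lang="x-default">An Example Article Title</rdf:li>
            </rdf:Alt>
          </dc:title>
          <dc:identifier>doi:10.9999/example.123</dc:identifier>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>
    <?xpacket end="w"?>

Note that the payload between the wrappers is ordinary RDF/XML, which is exactly why the same toolchain that reads RSS 1.0 metadata can, in principle, read this too.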
XMP has much in common with RSS 1.0. They are both profiles of RDF/XML. They are both flawed in certain respects because of self-imposed limitations. But they both build on a robust and open data model for the web (RDF) and are reasonably open; at the least, they are extensible. One (RSS 1.0) was defined in an open process by committee; the other is an open (i.e. published) specification provided by a vendor.
From our point of view both specifications are sufficiently advanced to be immediately useful, though I’m not sure how one could influence the further development of either. RSS 1.0 is essentially frozen, with Atom positioned as a successor technology, although Atom does not conform to the RDF model. (The upshot is that an RSS 1.0 feed can be consumed completely by an RDF-aware application, while an Atom feed would need to be pre-processed before any RDF “goodness” could be gleaned from it.) By contrast, XMP is a vendor-defined technology and alive, if not perhaps kicking. I am unaware of any process for formally contributing to XMP development apart from shouting from the terraces. Nonetheless, both technologies are usable as is.
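To see why the pre-processing is needed, compare a sketch of the same (invented) article expressed as an Atom entry. Atom is perfectly good plain XML in its own namespace, but an RDF parser finds no triples in it without a transformation step first (GRDDL being one candidate route):

    <entry xmlns="http://www.w3.org/2005/Atom">
      <title>An Example Article Title</title>
      <id>info:doi/10.9999/example.123</id>
      <link href="http://dx.doi.org/10.9999/example.123"/>
      <updated>2006-01-01T00:00:00Z</updated>
      <author><name>A. N. Author</name></author>
    </entry>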
It is curious that no consistent packaging (and delivery) of metadata has yet been achieved with HTML, the original web interface. The HTML <meta> and <link> elements are employed by publishers with varying degrees of consistency. There are also RDF islands that can be embedded within HTML comments (as used, e.g., by CC licenses). And then there are COinS objects. But it’s all a bit of a mish-mash to date. Certainly, I don’t recall seeing any guidelines from Crossref as to how machine-readable metadata (even markup for the DOI itself) may be embedded within HTML pages, rather than displayed on HTML pages for human readers.
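By way of illustration, one long-standing (but by no means universal) convention is the Dublin Core encoding of RFC 2731, which uses exactly those two elements. The values here are invented, and other publishers use different prefixes, different vocabularies, or nothing at all, which is rather the point:

    <head>
      <title>An Example Article Title</title>
      <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />
      <meta name="DC.title" content="An Example Article Title" />
      <meta name="DC.creator" content="A. N. Author" />
      <meta name="DC.identifier" content="doi:10.9999/example.123" />
    </head>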
This lack of uniform metadata deployment for HTML pages could have something to do with context. With RSS and XMP we are dealing with remote objects, whereas with HTML we are generally accessing the content directly on the content server and so already have a semantic context. It could be, though, that metadata delivery from HTML pages will finally become more uniform with the further development of standards such as microformats and especially RDFa, GRDDL, etc. It is also interesting to note that an XMP packet could just as easily be embedded within an HTML page: if this technology were to be adopted more widely for embedding in other media assets, then why not consider the same technology for ordinary web pages?
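As a sketch of that last thought, and this is pure speculation on my part rather than any sanctioned practice, the packet shown earlier could simply be parked inside an HTML comment, much as the Creative Commons RDF islands are today:

    <!-- a speculative embedding of an XMP packet in an ordinary web page
    <?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      ... rdf:RDF payload as in the packet sketched above ...
    </x:xmpmeta>
    <?xpacket end="w"?>
    -->

A harvester that already knew how to scan a JPEG or PDF for the xpacket markers could scan an HTML page in just the same way.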
I can’t help feeling, though, that XMP has a lot of promise and is very timely. There are only three real obstacles: creating XMP packets, writing them and reading them. To my mind, once one has a good grasp of XMP, creating the packets can be done with common tools. The same, more or less, goes for reading the packets; I have shown earlier that this is readily achievable. The only major block is writing the packets into media files, although there is (if patchy) create/write support from open source libraries, as well as (perhaps limited) support from commercial products. But, anyway, it’s certainly do-able.