Blog

Start citing data now. Not later

Recording data citations supports data reuse and aids research integrity and reproducibility. Crossref makes it easy for our members to submit data citations to support the scholarly record.

TL;DR: Citations are essential, core metadata that all members should submit for all articles, conference proceedings, preprints, and books. Submitting data citations to Crossref has long been possible, and it's easy. You just need to:

- Include data citations in the references section, as you would for any other citation
- Include a DOI or other persistent identifier for the data if one is available, just as you would for any other citation
- Submit the references to Crossref through the content registration process, as you would for any other record

Your data citations will then flow through all the normal processes that Crossref applies to citations.
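Once registered, references (including data citations) surface as a list of reference objects on each work record in the Crossref REST API. Here is a minimal sketch, assuming the REST API's documented reference-object shape (`key`, `DOI`, `unstructured`); the sample entries themselves are invented for illustration:

```python
# Sketch: inspecting reference metadata shaped like the Crossref REST API's
# "reference" objects. The field names (key, DOI, unstructured) follow that
# API; the sample references below are invented.

def references_with_identifiers(references):
    """Split references into those carrying a DOI and those without one."""
    with_doi = [r for r in references if r.get("DOI")]
    without_doi = [r for r in references if not r.get("DOI")]
    return with_doi, without_doi

# Hypothetical references section, including one data citation with a DOI.
sample_references = [
    {"key": "ref1", "DOI": "10.5061/dryad.example",
     "unstructured": "Smith J. (2020). Survey dataset. Dryad."},
    {"key": "ref2",
     "unstructured": "Jones A. (2019). An article cited without a DOI."},
]

with_doi, without_doi = references_with_identifiers(sample_references)
```

A data citation with a persistent identifier attached is what lets downstream services link the article to the dataset; the `unstructured` string alone does not.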

Service Provider perspectives: A few minutes with our publisher hosting platforms

Service Providers work on behalf of our members by creating, registering, querying and/or displaying metadata. We rely on this group to support our schema as it evolves, to roll out new and updated services to members, and to work closely with us on a variety of matters of mutual interest. Many of our Service Providers have been with us since the early days of Crossref. Others have joined as scholarly communication has grown and services have evolved.

Event Data: A Plan of Action

Event Data uncovers links between Crossref-registered DOIs and diverse places where they are mentioned across the internet. Whereas a citation links one research article to another, events are a way to create links to locations such as news articles, data sets, Wikipedia entries, and social media mentions. We’ve collected events for several years and make them openly available via an API for anyone to access, as well as creating open logs of how we found each event.
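Each Event links a subject (the place where the mention happened) to an object (usually a DOI), along with provenance fields. A minimal sketch of working with records in that shape, assuming the Event fields `subj_id`, `obj_id`, `source_id`, and `relation_type`; the sample events are invented:

```python
from collections import Counter

# Sketch: tallying Events for one DOI by source, using field names in the
# shape Event Data records expose (subj_id, obj_id, source_id,
# relation_type). The sample events below are invented for illustration.

def events_by_source(events):
    """Count how many Events each source contributed."""
    return Counter(e["source_id"] for e in events)

sample_events = [
    {"subj_id": "https://en.wikipedia.org/wiki/Example",
     "obj_id": "https://doi.org/10.5555/12345678",
     "source_id": "wikipedia", "relation_type": "references"},
    {"subj_id": "https://example-news.org/story",
     "obj_id": "https://doi.org/10.5555/12345678",
     "source_id": "newsfeed", "relation_type": "discusses"},
    {"subj_id": "https://en.wikipedia.org/wiki/Other",
     "obj_id": "https://doi.org/10.5555/12345678",
     "source_id": "wikipedia", "relation_type": "references"},
]

counts = events_by_source(sample_events)
```

Grouping by source is a common first step because each source (Wikipedia, news feeds, and so on) has its own collection method, recorded in the open logs mentioned above.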

Fast, citable feedback: Peer reviews for preprints and other record types

Crossref has supported depositing metadata for preprints since 2016 and for peer reviews since 2018. Now we are putting the two together; in fact, we will permit peer reviews to be registered for any record type.

EASE Council Post: Rachael Lammey on the Research Nexus

This blog was initially posted on the European Association of Science Editors (EASE) blog: “EASE Council Post: Rachael Lammey on the Research Nexus”. EASE President Duncan Nicholas accurately introduces it as “a whole lot of information and insights about metadata and communication standards in one post”. I was given a wide brief to decide on the topic of my EASE blog, so I thought I’d write one that tries to encompass everything - I’ll explain what I mean by that.

Events got the better of us

Publisher metadata is one side of the story surrounding research outputs, but the conversations, connections, and activities that build up around scholarly research take place all over the web. We built Event Data to capture, record, and make available these ‘Events’, providing open, transparent, and traceable information about the provenance and context of every Event. Events are comments, links, shares, bookmarks, references, etc.

What’s your (citations’) style?

Bibliographic references in scientific papers are the end result of a process typically composed of three steps: finding the right document to cite, obtaining its metadata, and formatting the metadata using a specific citation style. This end result, however, does not preserve the information about the citation style used to generate it. Can the citation style be guessed from the reference string alone? TL;DR: I built an automatic citation style classifier. It classifies a given bibliographic reference string into one of 17 citation styles or “unknown”.

What if I told you that bibliographic references can be structured?

Last year I spent several weeks studying how to automatically match unstructured references to DOIs (you can read about these experiments in my previous blog posts). But what about references that are not in the form of an unstructured string, but rather a structured collection of metadata fields? Are we matching them, and how? Let’s find out.

Underreporting of matched references in Crossref metadata


Geoffrey Bilder – 2019 February 05

In: APIs, Citation, Metadata

TL;DR

About 11% of the available references in records served through our OAI-PMH and REST APIs are missing DOIs that they should have. We have deployed a fix, but it is running over billions of records, so we don’t expect it to complete until mid-April.

Note that the Cited-by API that our members use appears to be unaffected by this problem.
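The quoted proportion is simply missing-DOI references over all available references. A hypothetical illustration of that arithmetic (the records and counts here are invented, not Crossref data):

```python
# Sketch: computing the share of references that lack a DOI, the kind of
# figure the post reports as roughly 11%. All records below are invented.

def missing_doi_rate(records):
    """Fraction of references, across all records, that have no DOI."""
    total = sum(len(r["references"]) for r in records)
    missing = sum(
        1 for r in records for ref in r["references"] if not ref.get("DOI")
    )
    return missing / total if total else 0.0

sample_records = [
    {"references": [{"DOI": "10.5555/a"}, {"unstructured": "no DOI here"}]},
    {"references": [{"DOI": "10.5555/b"}, {"DOI": "10.5555/c"}]},
]

rate = missing_doi_rate(sample_records)  # 1 of 4 references lacks a DOI
```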

Reference matching: for real this time

In my previous blog post, Matchmaker, matchmaker, make me a match, I compared four approaches for reference matching. The comparison was done using a dataset composed of automatically-generated reference strings. Now it’s time for the matching algorithms to face the real enemy: the unstructured reference strings deposited with Crossref by some members. Are the matching algorithms ready for this challenge? Which algorithm will prove worthy of becoming the guardian of the mighty citation network? Buckle up and enjoy our second matching battle!