When someone links their data online, or mentions research on a social media site, we capture that event and make it available for anyone to use in their own way. We provide the unprocessed data—you decide how to use it.
Before the expansion of the Internet, most discussion about scholarly content stayed within scholarly content, with articles citing each other. With the growth of online platforms for discussion, publication and social media, we have seen discussions extend into new, non-traditional venues.
Crossref Event Data captures this activity and acts as a hub for the storage and distribution of this data. An event may be a citation in a dataset or patent, a mention in a news article, Wikipedia page or on a blog, or discussion and comment on social media.
How Event Data works
Event Data monitors a range of sources, chosen for their importance in scholarly discussion. We make events available via an API for users to access and interpret. Our aim is to provide context to published works and connect diverse parts of the dialogue around research. Learn more about the sources from which we capture events.
The Event Data API provides raw data about events alongside context: how and where each event was collected. Users can process this data to suit their requirements.
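As an illustration, a query against the events API can be assembled as below. This is a minimal sketch: the DOI and `mailto` address are placeholders, and the parameter names (`obj-id`, `rows`, `source`, `mailto`) reflect our understanding of the public API.

```python
from urllib.parse import urlencode

BASE = "https://api.eventdata.crossref.org/v1/events"

def build_events_query(obj_doi, source=None, rows=100, mailto="you@example.org"):
    """Assemble a query URL for the Event Data API.

    obj_doi -- the DOI of the work whose events we want (the 'object')
    source  -- optionally restrict results to one agent, e.g. 'wikipedia'
    mailto  -- a contact address identifying the caller (placeholder here)
    """
    params = {"obj-id": f"doi:{obj_doi}", "rows": rows, "mailto": mailto}
    if source:
        params["source"] = source
    return f"{BASE}?{urlencode(params)}"

# Look up events that mention a (hypothetical) DOI, restricted to Wikipedia:
url = build_events_query("10.5555/12345678", source="wikipedia")
print(url)
```

The returned JSON can then be processed however the user requires, since the API deliberately exposes the raw events rather than an interpretation of them.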
What is Event Data for?
Event Data can be used for a number of different purposes:
Authors can find out where their work has been reused and commented on.
Readers can access more context around published research, including links to supporting documents and commentary that aren’t in a journal article.
Publishers and funders can assess the impact of published research beyond citations.
Service providers can enrich, analyze, interpret, and report on the data via their own tools.
Data intelligence and analysis organisations can access a broad range of sources with commentary relevant to research articles.
Anyone can contribute to Event Data by mentioning the DOI or URL of a Crossref-registered work in one of the monitored sources. We also welcome third parties who wish to send events or contribute to code that covers new sources. Learn more about contributing to or using Crossref Event Data.
Agreement and fees for Event Data
Event Data is a public API, giving access to raw data, and there are no fees. In the future we will introduce a service-based offering with additional features and benefits. Learn more about the Event Data terms.
What is an event?
In the broadest sense, an event is any time someone refers to a research article with a registered DOI anywhere online. Ideally we would capture all events, but there are limitations:
We can’t monitor the entire Internet, and instead check sites that are most likely to discuss academic content. There are still venues that could be relevant and that we do not cover yet.
Users online refer to academic content in different ways, sometimes using the DOI but more often using the URL or just the article name. We try to resolve mentions of DOIs or publisher URLs to a matching article, but this isn’t always possible. This means we may miss mentions of an article even from sources we are tracking.
At present we are not able to track events where no link is included and only the title or other part of the metadata is mentioned.
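To give a flavour of what this matching involves, the sketch below pulls DOI-like strings out of free text with a regular expression. The pattern is a common heuristic for modern DOIs, not the exact logic our agents use, and the example text is invented.

```python
import re

# Heuristic pattern for modern DOIs (prefix "10.", registrant code, suffix).
# Real matching also has to handle publisher landing-page URLs, shortened
# links, and mentions with no link at all, which is much harder.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

def extract_dois(text):
    """Return DOI-like strings found in a block of text."""
    return DOI_PATTERN.findall(text)

post = "Great paper: https://doi.org/10.5555/12345678 (see also the PDF)"
print(extract_dois(post))
```

A mention that carries only the article title would slip past a pattern like this, which is why title-only mentions are currently out of scope.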
For Crossref Event Data, an event consists of three parts:
A subject: where was the research mentioned? (such as Wikipedia)
An object: which research was mentioned? (a Crossref or DataCite DOI)
A relationship: how was the research mentioned? (such as cites or discusses)
We determine the relationship from the source of the event; it indicates, in broad categories, how the subject and object are linked.
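In the JSON the API returns, this triple surfaces as fields of the event record. The sketch below shows a pared-down event; the field names follow the Event Data API as we understand it, and the identifiers are placeholders.

```python
# A pared-down event: a Wikipedia page (subject) referencing a DOI (object).
event = {
    "subj_id": "https://en.wikipedia.org/wiki/Example_article",  # where it was mentioned
    "obj_id": "https://doi.org/10.5555/12345678",                # which work was mentioned
    "relation_type_id": "references",                            # how it was mentioned
    "source_id": "wikipedia",                                    # the agent that collected it
    "occurred_at": "2024-05-01T12:00:00Z",                       # when the event happened
}

def describe(e):
    """Render the subject-relation-object triple as a sentence."""
    return f"{e['subj_id']} {e['relation_type_id']} {e['obj_id']} (via {e['source_id']})"

print(describe(event))
```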
Events are collected from the various data sources by software called agents. Most agents are written and operated by Crossref, with some code contributed by our partners. Possible events are passed to the percolator software, which tries to match each event with an object DOI. This process is fully automated.
We perform periodic automated checks on the integrity of the data and update event types. Deduplication is also part of the process performed by the percolator.
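Deduplication can be pictured as keeping one event per (subject, object, relation) key. The sketch below is a simplification of that idea; the percolator’s real logic is more involved.

```python
def deduplicate(events):
    """Keep the first event seen for each (subject, object, relation) key."""
    seen = set()
    unique = []
    for e in events:
        key = (e["subj_id"], e["obj_id"], e["relation_type_id"])
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

# Three candidate events, one of which repeats an earlier mention:
candidates = [
    {"subj_id": "s1", "obj_id": "o1", "relation_type_id": "discusses"},
    {"subj_id": "s1", "obj_id": "o1", "relation_type_id": "discusses"},  # duplicate
    {"subj_id": "s2", "obj_id": "o1", "relation_type_id": "cites"},
]
print(len(deduplicate(candidates)))  # two unique events remain
```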
To provide transparency, we keep an evidence record about how we matched the object to the subject. Learn more about transparency in Event Data, including links to the open source code and data.
The following agents currently collect data:
Agent/Data source | Event type
Crossref metadata | Relationships, references, and links to DataCite registered content
DataCite metadata | Links to Crossref registered content
Faculty Opinions | Recommendations of research publications
Hypothes.is | Annotations in Hypothes.is
Newsfeed | Discussed in blogs and media
Reddit | Discussed on Reddit
Reddit Links | Discussed on sites linked to in subreddits
Stack Exchange Network | Discussed on StackExchange sites
Wikipedia | References on Wikipedia pages
Wordpress.com | Discussed on Wordpress.com sites
We are planning to increase the number of agents and sources, and welcome contact from anyone who can contribute. Patent Event Data was historically collected from The Lens. Events from Twitter were collected until February 2023; note that all Twitter events have since been removed from search results in accordance with our contract with Twitter. See the Community Forum for more information.
What Event Data is not
By providing Event Data, Crossref provides an open, transparent information source for the scholarly community and beyond. It is important to understand, however, that it may not be suitable for all potential users. Here are some of the limitations:
It is not a service that provides metrics or collated reports, and it does not offer data analysis.
Crossref does not build applications or website plugins for Event Data, for example for displaying results on publisher websites. We do, however, welcome third parties who wish to develop such platforms.
Event Data collection is fully automated and may therefore contain errors or be incomplete. We cannot provide any guarantees in this regard, and users must assess whether the quality of the data is sufficient for their particular use case. There may also be delays between an event occurring and it appearing in Event Data.
Events might be missed due to the limitations of the collection algorithms we use. There is also a small possibility that we link an event to the wrong object.
Event Data does not cover every source of academic discussion. In some cases this is because there is no public access to the data; in others it is because we have not had the capacity to build an agent.
While we hope the data is useful for many purposes, we encourage users to be responsible and exercise caution when making use of Event Data.