We’ve just released an update to our Participation Reports, which give our members a view of how they are each working towards best practices in open metadata. Prompted by some of the signatories and organizers of the Barcelona Declaration, which Crossref supports, and with the help of our friends at CWTS Leiden, we have fast-tracked the work to include an updated set of metadata best practices in Participation Reports for our members.
Participation Reports are a visualization of the metadata that’s available via our free REST API. There’s a separate Participation Report for each member, and each report shows what percentage of that member’s metadata records include 11 key metadata elements. These key elements add context and richness, and help to open up content to easier discovery and wider and more varied use. As a member, you can use Participation Reports to see for yourself where the gaps in your organization’s metadata are, and perhaps compare your performance to others. Participation Reports are free and open to everyone.
How a Participation Report works
There’s a separate Participation Report for each member. Visit Participation Reports and start typing the name of a member under Find a member. A list of member names will appear for you to select from. Behind the scenes, our REST API will pull together a report and output it in a clear, visual way. Please note: if you’ve added new records or updated the metadata in existing records, it can take up to 24 hours for the changes to appear in your Participation Report.
You can use the dropdowns near the top of the page to see reports for different publication time periods and work types. Current content includes any records with a publication date in the current calendar year or up to two years previously. For example, in 2024, current content is anything with a publication date in 2024, 2023, or 2022. Anything published in 2021 or earlier is considered backfile content.
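If you’d like to work with these numbers directly, the coverage data behind each report is available from our REST API’s members route. Here’s a minimal sketch in Python (using the requests library); the example member ID is hypothetical, and response field names such as coverage-type should be checked against the live API, since the exact shape of the response may differ.

```python
import requests

# Minimal sketch: pull the coverage data that Participation Reports visualize.
# The member ID and the "coverage-type" field names are assumptions to verify
# against the live API response for your own member record.
MEMBER_ID = 78  # hypothetical example member ID

resp = requests.get(f"https://api.crossref.org/members/{MEMBER_ID}", timeout=30)
resp.raise_for_status()
member = resp.json()["message"]

print(member.get("primary-name", "unknown member"))
for period in ("current", "backfile"):
    coverage = member.get("coverage-type", {}).get(period, {})
    print(f"\n{period} content coverage:")
    for element, value in sorted(coverage.items()):
        # Keep only ratio-like values (skip timestamps and other metadata).
        if isinstance(value, (int, float)) and 0 <= value <= 1:
            print(f"  {element}: {value:.0%}")
```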
The work types currently covered by Participation Reports are:
Journal articles
Conference papers
Books
Book chapters
Posted content (including preprints)
Reports
Datasets
Standards
Dissertations
The 11 key metadata elements for which Participation Reports calculate each member’s coverage are:
References
Percentage of records that include reference lists in their metadata.
Why is this important?
Your references are a big part of the story of your content, highlighting its provenance and where it sits in the scholarly map. References give researchers and other users of Crossref metadata a vital data point through which to find your content, which in turn increases the chances of your content being read and used.
Abstracts
Percentage of records that include abstracts in their metadata.
Make sure you include abstracts when you register your content - the abstract element is available for every work type other than dissertations and reports. For existing records, you can add abstracts by running a full metadata redeposit (update).
ORCID iDs
Percentage of records containing ORCID iDs. These persistent identifiers enable users to precisely identify a researcher’s work - even when that researcher shares a name with someone else, or if they change their name.
Why is this important?
Researcher names are inherently ambiguous. People share names. People change names. People record names differently in different circumstances.
Governments, funding agencies, and institutions are increasingly seeking to account for their research investments. They need to know precisely what research outputs are being produced by the researchers that they fund or employ. ORCID iDs allow this reporting to be done automatically and accurately.
For some funders, ORCID iDs are critical for their research investment auditing, and they are starting to mandate that researchers use ORCID iDs.
Researchers who do not have ORCID iDs included in their Crossref metadata risk not being counted in these audits and reports.
Make sure you ask your authors for their ORCID iD through your submission system and include them when you register your content. There’s a specific element in the XML for ORCID iDs if you register via XML. If you use the web deposit form or if you’re still using the deprecated Metadata Manager, there’s a specific field to complete.
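For illustration, here’s a rough sketch (in Python, using the standard library’s ElementTree) of what contributor markup with an ORCID iD might look like. The name and iD below belong to ORCID’s fictitious example researcher, and the exact element names and ordering depend on the schema version you deposit against, so treat this as a sketch rather than a complete deposit.

```python
import xml.etree.ElementTree as ET

# Illustrative contributor fragment with an ORCID iD, loosely following the
# Crossref deposit schema (person_name with an ORCID child element).
contributors = ET.Element("contributors")
person = ET.SubElement(contributors, "person_name",
                       sequence="first", contributor_role="author")
ET.SubElement(person, "given_name").text = "Josiah"
ET.SubElement(person, "surname").text = "Carberry"
# The ORCID element takes the full https://orcid.org/ URI; 'authenticated'
# indicates the iD was collected through the authenticated ORCID workflow.
orcid = ET.SubElement(person, "ORCID", authenticated="true")
orcid.text = "https://orcid.org/0000-0002-1825-0097"

print(ET.tostring(contributors, encoding="unicode"))
```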
Affiliations
Percentage of records that include at least one contributor affiliation.
Make sure you collect affiliation details from authors via your submission system and include them in your future Crossref deposits.
For existing records, you can add affiliation metadata by running a full metadata redeposit (update).
ROR IDs
The percentage of registered records that include at least one ROR ID, e.g. in the contributor metadata.
Why is this important?
Affiliation metadata ensures that contributor institutions can be identified and research outputs can be traced by institution.
A ROR ID is a single, unambiguous, standardized organization identifier that will always stay the same. This means that contributor affiliations can be clearly disambiguated, which greatly improves the usability of your metadata.
If the submission system you use does not yet support ROR, or if you don’t use a submission system, you’ll still be able to provide ROR IDs in your Crossref metadata. ROR IDs can be added to JATS XML, and many Crossref helper tools support the deposit of ROR IDs. There’s also an OpenRefine reconciler that can map your internal identifiers to ROR identifiers.
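If you have many affiliations to work through, the ROR API also offers affiliation matching. The sketch below is illustrative only - the endpoint and response fields are assumptions based on the public ROR API, and matches should always be reviewed by a person before they go into your metadata.

```python
import requests

# Rough sketch: match a free-text affiliation to a ROR ID using the ROR API's
# affiliation matching parameter. Field names ("items", "chosen",
# "organization") are assumptions to verify against the ROR API documentation.
affiliation = "Department of Physics, University of Oxford"

resp = requests.get(
    "https://api.ror.org/organizations",
    params={"affiliation": affiliation},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    if item.get("chosen"):  # the API's best confident match, if any
        org = item.get("organization", {})
        print(f"{org.get('name')} -> {org.get('id')}")
        break
else:
    print("No confident match; consider searching https://ror.org directly.")
```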
If you find that an organization you are looking for is not yet in ROR, please submit a curation request.
For existing records, you can add affiliation metadata by running a full metadata redeposit (update).
Funder Registry IDs
The percentage of registered records that contain the name and Funder Registry ID of at least one of the organizations that funded the research.
Why is this important?
Funding acknowledgements give vital context for users and consumers of your content. Extracting these acknowledgements from your content and adding them to your metadata allows funding organizations to better track the published results of their grants, and allows publishers to analyze the sources of funding for their authors and ensure compliance with funder mandates. And, by using the unique funder IDs from our central Funder Registry, you can help ensure the information is consistent across publishers.
Make sure you collect funder names from authors via your submission system, or extract them from acknowledgement sections. Match the names with the corresponding Funder IDs from our Funder Registry and make sure you include them in your future Crossref deposits.
If your funder isn’t yet in the Funder Registry, please let us know.
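One way to find the matching Funder ID is to query the funders route of our REST API. The sketch below (Python, using the requests library) shows the general idea; the query term is just an example, and returned names should be checked carefully, since funder name matching is rarely exact.

```python
import requests

# Sketch: look up Funder Registry IDs by name via the /funders route.
# The response field names ("items", "id", "name") should be verified against
# the live API, and candidate matches reviewed by a person.
resp = requests.get(
    "https://api.crossref.org/funders",
    params={"query": "wellcome trust", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

for funder in resp.json()["message"]["items"]:
    print(f"{funder['name']}  (Funder ID: {funder['id']})")
```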
Award numbers
The percentage of registered records that contain at least one funding award number - a number assigned by the funding organization to identify the specific piece of funding (the award or grant).
Why is this important?
Funding organizations are able to better track the published results of their grants
Research institutions are able to track the published outputs of their employees
Publishers are able to analyze the sources of funding for their authors and ensure compliance with funder mandates
Everyone benefits from greater transparency on who funded the research, and what the results of the funding were.
Make sure you collect grant IDs from authors via your submission system, or extract them from acknowledgement sections. Make sure you include them in your future Crossref deposits and add them to your existing records using our supplemental metadata upload method.
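To get a feel for which of your existing records already carry award numbers, you can sample your registered works through the REST API. The sketch below uses a hypothetical member ID, and the funder and award field names are assumptions to verify against a raw record for your own content.

```python
import requests

# Sketch: spot-check registered works for funding award numbers.
# The /members/{id}/works route and the "funder"/"award" fields follow the
# public works API, but inspect a raw record for your own content first.
MEMBER_ID = 78  # hypothetical example member ID

resp = requests.get(
    f"https://api.crossref.org/members/{MEMBER_ID}/works",
    params={"rows": 20},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["message"]["items"]:
    awards = [
        number
        for funder in work.get("funder", [])
        for number in funder.get("award", [])
    ]
    status = ", ".join(awards) if awards else "no award numbers"
    print(f"{work['DOI']}: {status}")
```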
Crossmark enabled
Percentage of records using the Crossmark service, which gives readers quick and easy access to the current status of an item of content - whether it’s been updated, corrected, or retracted.
Why is this important?
Crossmark gives quick and easy access to the current status of an item of content. With one click, you can see if the content has been updated, corrected, or retracted and can access extra metadata provided by the publisher. It allows you to reassure readers that you’re keeping content up-to-date, and showcases any additional metadata you want readers to view while reading the content.
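For illustration, here’s a rough sketch of how the same status information can be checked programmatically through our REST API. The updates filter and the update-to field are assumptions based on the public works API, so verify them before relying on this approach; Crossmark itself surfaces the status to readers via the Crossmark button.

```python
import requests

# Sketch: look for update notices (corrections, retractions, and so on)
# registered against a given DOI. The "updates" filter and "update-to" field
# are assumptions to verify against the public works API.
DOI = "10.5555/12345678"  # hypothetical example DOI

resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"updates:{DOI}", "rows": 10},
    timeout=30,
)
resp.raise_for_status()

notices = resp.json()["message"]["items"]
if not notices:
    print(f"No registered updates found for {DOI}")
for notice in notices:
    for update in notice.get("update-to", []):
        print(f"{notice['DOI']} is a {update.get('type', 'update')} of {update.get('DOI')}")
```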
Text mining URLs
The percentage of registered records containing full-text URLs in the metadata to help researchers easily locate your content for text and data mining.
Why is this important?
Researchers are increasingly interested in carrying out text and data mining of scholarly content - the automatic analysis and extraction of information from large numbers of documents. If you can make it easier for researchers to mine your content, you will massively increase your discoverability.
There are technical and logistical barriers to text and data mining for scholarly researchers and publishers alike. It is impractical for researchers to negotiate many different websites to locate the full text that they need. And it doesn’t make sense for each publisher to have a different set of instructions about how best to find the full text in the required format. All parties benefit from standard APIs and data representations that enable text and data mining across both open access and subscription-based publishers.
Our API can be used by researchers to locate the full text of content across publisher sites. Members register these URLs - often including multiple links for different formats such as PDF or XML - and researchers can request them programmatically.
The member remains responsible for actually delivering the full-text of the content requested. This means that open access publishers can simply deliver the requested content, while subscription publishers use their existing access control systems to manage access to full-text content.
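For example, a researcher might discover the registered full-text links for a record like the minimal Python sketch below; the DOI is a hypothetical example, and exactly which link entries appear depends on what each member has deposited.

```python
import requests

# Sketch: list registered full-text links intended for text and data mining.
# The "link" field with "URL", "content-type", and "intended-application"
# entries follows the public works API.
DOI = "10.5555/12345678"  # hypothetical example DOI

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

for link in work.get("link", []):
    if link.get("intended-application") == "text-mining":
        print(f"{link.get('content-type', 'unspecified')}: {link['URL']}")
```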
License URLs
The percentage of registered records that contain URLs that point to a license that explains the terms and conditions under which readers can access content.
Why is this important?
Adding the full-text URL to your metadata is of limited value if researchers can’t determine what they are permitted to do with the full text. This is where license URLs come in. Members include a link to their use and reuse conditions: whether their own proprietary license, or an open license such as Creative Commons.
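As a quick illustration, license URLs can be read back from the same works route; the sketch below assumes the license field described in the public works API, and not every record will include it.

```python
import requests

# Sketch: print registered license URLs for a record. The "license" field with
# "URL" and "content-version" entries follows the public works API.
DOI = "10.5555/12345678"  # hypothetical example DOI

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()

for licence in resp.json()["message"].get("license", []):
    print(f"{licence.get('content-version', 'unspecified')}: {licence['URL']}")
```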
Similarity Check URLs
The percentage of registered records that include full-text links for the Similarity Check service.
Why is this important?
The Similarity Check service helps you to prevent scholarly and professional plagiarism by providing editorial teams with access to Turnitin’s powerful text comparison tool.
Similarity Check members contribute their own published content to iThenticate’s database of full-text literature via Similarity Check URLs, and doing so is a condition of using the service. If members aren’t registering these URLs, they can’t take part in Similarity Check.
For future records, make sure you include these URLs as part of your standard metadata deposit. They need to be deposited within a collection element with the property crawler-based, containing an item element with its crawler attribute set to iParadigms.
You can add these URLs into your already-deposited DOIs using a resource-only deposit, or by using the Supplemental-Metadata Upload option available with our web deposit form.
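To illustrate what that looks like in a deposit, here’s a minimal sketch (in Python, using ElementTree) of the doi_data fragment carrying a Similarity Check URL. The DOI and URLs are hypothetical, and the surrounding deposit structure is omitted, so check the schema version you deposit against.

```python
import xml.etree.ElementTree as ET

# Illustrative doi_data fragment: a collection with property="crawler-based"
# containing an item whose crawler attribute is "iParadigms", as described
# above. DOI and URLs are placeholders.
doi_data = ET.Element("doi_data")
ET.SubElement(doi_data, "doi").text = "10.5555/12345678"
ET.SubElement(doi_data, "resource").text = "https://example.com/article/12345678"

collection = ET.SubElement(doi_data, "collection", property="crawler-based")
item = ET.SubElement(collection, "item", crawler="iParadigms")
# Point this resource at the full text you want indexed for Similarity Check.
ET.SubElement(item, "resource").text = "https://example.com/article/12345678.pdf"

print(ET.tostring(doi_data, encoding="unicode"))
```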
Page owner: Lena Stoll | Last updated 2024-October-15