This blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen.
As sponsors of this project, we at Crossref are excited to see this work shared out.
The scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce.
When Crossref began over 20 years ago, our members were primarily from the United States and Western Europe, but over the past several years our membership has become more global and diverse, growing to almost 18,000 organizations across 148 countries.
As we continue to grow, finding ways to help organizations participate in Crossref is an important part of our mission and approach. Our goal of creating the Research Nexus—a rich and reusable open network of relationships connecting research organizations, people, things, and actions; a scholarly record that the global community can build on forever, for the benefit of society—can only be achieved by ensuring that participation in Crossref is accessible to all.
In August 2022, the United States Office of Science and Technology Policy (OSTP) issued a memo (PDF) on ensuring free, immediate, and equitable access to federally funded research (a.k.a. the “Nelson memo”). Crossref is particularly interested in and relevant for the areas of this guidance that cover metadata and persistent identifiers—and the infrastructure and services that make them useful.
Funding bodies worldwide are increasingly involved in research infrastructure for dissemination and discovery.
Preprints have become an important tool for rapidly communicating and iterating on research outputs. In addition to generalist platforms, there is now a range of preprint servers: some subject-specific, some serving a particular geographical area, and others linked to publishers or individual journals. The Crossref schema has supported preprints since 2016, and registrations have since grown to around 16,000 new preprint DOIs per month.
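To get a feel for these numbers yourself, the public Crossref REST API can be queried for posted-content (preprint) records. The short sketch below is not an official Crossref client; it simply calls the api.crossref.org works endpoint with the type:posted-content filter and reads the total count from the response.

```python
import requests

# Minimal sketch: count posted-content (preprint) records registered with
# Crossref via the public REST API. rows=0 returns only the count, no records.
url = "https://api.crossref.org/works"
params = {"filter": "type:posted-content", "rows": 0}
resp = requests.get(url, params=params, timeout=30)
resp.raise_for_status()
print("Total preprint (posted-content) records:",
      resp.json()["message"]["total-results"])
```

Adding a `mailto` query parameter with your email address is the polite way to identify yourself to the API when running queries like this at any volume.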
We test a broad sample of DOIs to ensure they resolve. For each journal crawled, we select a sample equal to 5% of the journal's total DOIs, up to a maximum of 50; the selected DOIs span prefixes and issues.
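To make the sampling rule concrete, here is a small sketch of the calculation as just described; note that rounding the 5% figure up is our assumption, since the exact rounding is not specified.

```python
import math

def crawl_sample_size(total_dois: int) -> int:
    """Sample size for a journal crawl: 5% of its DOIs, capped at 50.
    (Rounding up is an assumption; the documentation does not specify.)"""
    return min(math.ceil(total_dois * 0.05), 50)

# For example: a journal with 400 DOIs yields a sample of 20,
# while a journal with 2,000 DOIs is capped at 50.
print(crawl_sample_size(400), crawl_sample_size(2000))  # 20 50
```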
The results are recorded in crawler reports, which you can access from the expanded view of the depositor report. If a title has been crawled, its last crawl date is shown in the Last Crawl Date column, and crawled DOIs that generate errors appear as a bold link. Click the Last Crawl Date to view the crawler status report for that title.
The crawler status report lists the following (a brief illustrative sketch of this logic appears after the list):
Total DOIs: the total number of DOI names for the title in the system as of the last crawl date
Checked: the number of DOIs crawled
Confirmed: the crawler found both the DOI and the article title on the landing page
Semi-confirmed: the crawler found either the DOI or the article title on the landing page
Not Confirmed: the crawler found neither the DOI nor the article title on the landing page
Bad: the landing page contains known phrases indicating the article is not available (for example, "article not found" or "no longer available")
Login Page: the crawler was prompted to log in and found no article title or DOI
Exception: an error occurred in the crawler code
httpCode: the resolution attempt returned an HTTP error (such as 400, 403, 404, or 500)
httpFailure: the HTTP connection to the server failed
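To show how these statuses relate to one another, here is a hypothetical sketch of the decision logic. The field names and the ordering of checks are our own assumptions, not the crawler's actual data model, and the Exception status (an internal crawler error) is assumed to be caught separately and is omitted.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of a single crawl outcome; field names are illustrative.
@dataclass
class CrawlResult:
    doi_found: bool
    title_found: bool
    login_prompt: bool = False
    known_bad_phrase: bool = False      # e.g. "article not found"
    http_error: Optional[int] = None    # e.g. 400, 403, 404, 500
    connection_failed: bool = False

def classify(r: CrawlResult) -> str:
    if r.connection_failed:
        return "httpFailure"     # HTTP connection to the server failed
    if r.http_error is not None:
        return "httpCode"        # resolution returned an HTTP error code
    if r.login_prompt:
        return "Login Page"
    if r.known_bad_phrase:
        return "Bad"
    if r.doi_found and r.title_found:
        return "Confirmed"
    if r.doi_found or r.title_found:
        return "Semi-confirmed"
    return "Not Confirmed"

print(classify(CrawlResult(doi_found=True, title_found=False)))  # Semi-confirmed
```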
Select a number to view its details. To crawl the title again, select re-crawl and enter an email address.