This past year has been a captivating journey of immersion within the Crossref community, a mix of online interactions and meaningful in-person experiences. From the engaging Sustainability Research and Innovation Conference in Port Elizabeth, South Africa, to the impactful webinars conducted globally, this has been more than just a professional endeavour; it has been a personal exploration of collaboration, insights, and a shared commitment to pushing the boundaries of scholarly communication.
One of the challenges that we face in Labs and Research at Crossref is that, as we prototype various tools, we need the community to be able to test them. Often, this involves asking members to deposit to a different endpoint or to change the way their platform works to incorporate a prototype.
The problem is that our community varies hugely in its technical capacity and in how easily its members can modify their platforms.
When each line of code is written, it is surrounded by a sea of context: who in the community it is for, what problem we’re trying to solve, what technical assumptions we’re making, what we already tried but didn’t work, how much coffee we’ve had today. All of these have an effect on the software we write.
By the time the next person looks at that code, some of that context will have evaporated.
It turns out that one of the things that is surprisingly difficult at Crossref is checking whether a set of Crossref credentials has permission to act on a specific DOI prefix. This is because a number of legacy systems store these mappings across different software components, from our Content System through to our CRM. To address this, I wrote a basic application, credcheck, that lets you test a set of Crossref credentials against an API.
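As a rough illustration of the kind of check this involves, here is a minimal sketch in Python. The endpoint URL, the meaning of the status codes, and the example prefix are placeholders rather than the real credcheck interface; it only shows the shape of "send credentials plus a prefix, read back an allowed/denied answer".

```python
"""Minimal sketch of a credential/prefix check.

This is NOT the actual credcheck tool: the endpoint URL and the
interpretation of status codes are assumptions for illustration only.
"""
import requests  # third-party: pip install requests

CHECK_ENDPOINT = "https://example.org/credcheck"  # placeholder URL


def can_act_on_prefix(username: str, password: str, prefix: str) -> bool:
    """Ask a (hypothetical) check service whether these credentials
    are allowed to act on the given DOI prefix."""
    resp = requests.get(
        CHECK_ENDPOINT,
        params={"prefix": prefix},
        auth=(username, password),  # HTTP Basic auth
        timeout=30,
    )
    # Assumption: 200 means "authorised", 401/403 mean "not authorised".
    return resp.status_code == 200


if __name__ == "__main__":
    print(can_act_on_prefix("role/username", "password", "10.5555"))
```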
The DOI error report helps you make sure your DOI links go where they’re supposed to. When a user clicks on a DOI that has not been registered, or tells us they’ve seen a DOI somewhere that doesn’t resolve to a website, they are sent to a form that collects the DOI, the user’s email address, and any comments the user wants to share.
We compile the DOI error report daily using those reports and comments, and email it to the technical contact at the member responsible for the DOI prefix as a .csv attachment. If you would like the DOI error report to be sent to a different person, please contact us.
The DOI error report .csv file contains the following fields (where provided by the user); a short sketch for reading the file programmatically follows the list:
DOI - the DOI being reported
URL - the referring URL
REPORTED-DATE - date the DOI was initially reported
USER-EMAIL - email of the user reporting the error
COMMENTS - any comments shared by the reporting user
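Here is a minimal sketch using Python's standard csv module. It assumes the attachment's first row contains the column names listed above and that the file is saved as doi-error-report.csv; adjust both to match your report.

```python
import csv

# Read the emailed report; the filename and header row are assumptions.
with open("doi-error-report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Fields other than DOI are only present where the user provided them.
        print(row["DOI"], row.get("REPORTED-DATE", ""), row.get("COMMENTS", ""))
```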
We find that approximately 2/3 of reported errors are ‘real’ problems. Common reasons why you might get this report include:
you’ve published/distributed a DOI but haven’t registered it
the DOI you published doesn’t match the registered DOI
a link was formatted incorrectly (a . at the end of a DOI, for example; see the sketch after this list)
a user has made a mistake (confusing 1 for l or 0 for O, or cut-and-paste errors)
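Two of the causes above (a trailing ., and stray whitespace from cut-and-paste) can often be spotted with a simple cleanup pass. The sketch below is illustrative only, and the example DOI is just a placeholder; character confusions such as 1 vs l usually need a registration lookup rather than a rule.

```python
def tidy_doi(reported: str) -> str:
    """Strip whitespace and trailing punctuation commonly picked up
    when a DOI is copied from surrounding text."""
    return reported.strip().rstrip(".,;")


# Example with a placeholder DOI:
print(tidy_doi(" 10.5555/12345678. "))  # -> 10.5555/12345678
```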
What should I do with my DOI error report?
Review the .csv file attached to your emailed report, and make sure that no legitimate DOIs are listed. Any legitimate DOIs found in this report should be registered immediately. When a DOI reported via the form is registered, we’ll send out an alert to the reporting user (if they’ve shared their email address with us).
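One way to triage the attachment is to look each reported DOI up against the public Crossref REST API: a 404 from api.crossref.org/works/{DOI} means the DOI is not currently registered. A minimal sketch (the example DOI is a placeholder):

```python
import requests  # third-party: pip install requests


def is_registered(doi: str) -> bool:
    """Return True if the DOI is known to the public Crossref REST API.

    DOIs containing unusual characters may need URL-encoding first."""
    resp = requests.get(f"https://api.crossref.org/works/{doi.strip()}", timeout=30)
    return resp.status_code == 200


# Any DOI from the report that you recognise as yours but which returns
# False here should be registered.
print(is_registered("10.5555/12345678"))
```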
I keep getting DOI error reports for DOIs that I have not published. What should I do about this?
It’s possible that someone is trying to link to your content with the wrong DOI. If you do a web search for the reported DOI, you may find the source of the problem - we often find incorrect linking from user-provided content like Wikipedia, or from DOIs inadvertently distributed by members to PubMed. If it’s still a mystery, please contact us.
Page owner: Isaac Farley | Last updated 2020-April-08