Not sure if you’re using iThenticate v1 or iThenticate v2? More here.
Not sure whether you’re an account administrator? Find out here.
The Submitted Works repository (also known as the Private Repository) is a new feature in iThenticate v2, now available to Crossref members. It lets you check for similarity not only against Turnitin’s extensive Content Database, but also against all previous manuscripts submitted to your iThenticate account across all the journals you work on. This can help you detect collusion between authors or identify potential duplicate submissions.
How does this work?
Suppose you receive a manuscript from Author 1 and decide to index it into your Submitted Works repository. Some time later, you receive a new manuscript from Author 2. When generating the Similarity Report for that manuscript, you choose to check against your Submitted Works repository. If a paragraph in Author 2’s manuscript matches a paragraph in Author 1’s manuscript, it is highlighted in your Similarity Report as a match against your Submitted Works repository.
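Conceptually, the repository behaves like an index of previously submitted text that each new manuscript is checked against. The toy Python sketch below illustrates that idea with exact paragraph matching; it is purely illustrative and is not Turnitin’s actual algorithm, which uses proprietary fingerprinting and also catches near-duplicate text, not just identical paragraphs. All names here are invented for the example.

```python
# Toy model of a private "Submitted Works" check. Illustrative only:
# Turnitin's real matching is proprietary and far more sophisticated.

def paragraphs(text: str) -> list[str]:
    """Split a manuscript into normalised paragraphs."""
    return [p.strip().lower() for p in text.split("\n\n") if p.strip()]

class SubmittedWorksRepository:
    """Stores paragraphs from indexed manuscripts, keyed to their submission."""

    def __init__(self) -> None:
        self._index: dict[str, str] = {}

    def index(self, submission_id: str, text: str) -> None:
        """Index a manuscript so later submissions can be checked against it."""
        for p in paragraphs(text):
            self._index[p] = submission_id

    def check(self, text: str) -> list[tuple[str, str]]:
        """Return (matching paragraph, earlier submission) pairs for a new manuscript."""
        return [(p, self._index[p]) for p in paragraphs(text) if p in self._index]

# Author 1's manuscript is indexed; Author 2's is then checked against it.
repo = SubmittedWorksRepository()
repo.index("author-1-manuscript", "An original introduction.\n\nA paragraph both manuscripts share.")
for para, source in repo.check("A different opening.\n\nA paragraph both manuscripts share."):
    print(f"Match against {source}: {para!r}")
```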
By clicking on a match, you can see the full text of the submission you’ve matched against, as well as details about that submission, such as the name and email address of the user who submitted it, the date it was submitted, and its title.
The ability to see the full source text and the ability to see the submission details can each be switched off individually.
As with any other match, matches against your Submitted Works repository can be excluded from the Sources Overview panel, or you can turn off matching against Submitted Works entirely in the settings.
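For members who use iThenticate v2 programmatically rather than through the web interface, the same choice is typically expressed when requesting a Similarity Report. The sketch below is a rough illustration only: the base URL is a placeholder, and the endpoint path, field names, and the SUBMITTED_WORK repository value are assumptions modeled on Turnitin Core API conventions, so check the API documentation for your account before relying on them.

```python
import requests

API_BASE = "https://your-tenant.example.com/api/v1"  # placeholder base URL
API_KEY = "YOUR_API_KEY"                              # placeholder credential
SUBMISSION_ID = "an-existing-submission-id"           # placeholder submission

# Hypothetical request: generate a Similarity Report that also searches the
# private Submitted Works repository. Dropping "SUBMITTED_WORK" from the list
# would be the API-side equivalent of turning those matches off.
response = requests.put(
    f"{API_BASE}/submissions/{SUBMISSION_ID}/similarity",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "generation_settings": {
            "search_repositories": ["INTERNET", "PUBLICATION", "SUBMITTED_WORK"],
        }
    },
    timeout=30,
)
response.raise_for_status()
print("Similarity report requested:", response.status_code)
```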