Service Provider perspectives: A few minutes with our publisher hosting platforms
Service Providers work on behalf of our members by creating, registering, querying and/or displaying metadata. We rely on this group to support our schema as it evolves, to roll out new and updated services to members, and to work closely with us on a variety of matters of mutual interest. Many of our Service Providers have been with us since the early days of Crossref; others have joined as scholarly communication has grown and services have evolved. Though fewer than 20 in number, their impact far outweighs the size of the group.
They, like us, work with a great variety of members and have a broad view into publishing trends. In this post, we focus on views from some of the publisher hosting platform Service Providers, who’ve taken the time to share their thoughts on a few questions:
It has become more and more important not only that DOIs are registered with the minimum metadata required for registration, but that the most complete set of metadata possible is sent along – including author identifiers, funding information, abstracts, and licenses – to support other Crossref services and improve discoverability.
– de Gruyter
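To make that point a bit more concrete: the public Crossref REST API exposes whether a registered DOI carries this richer metadata. Below is a minimal sketch (not an official tool) that checks a work record for an abstract, funding data, license information, and author ORCIDs; the function name and the test DOI are placeholders for illustration.

```python
# Minimal sketch: check which optional "enrichment" metadata a registered DOI
# exposes via the public Crossref REST API (https://api.crossref.org).
# The field names (abstract, funder, license, author ORCID) are standard
# REST API fields; the example DOI below is only a placeholder.
import requests

def metadata_completeness(doi: str) -> dict:
    """Return a simple presence check for optional metadata on a Crossref work."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    work = resp.json()["message"]
    return {
        "abstract": "abstract" in work,
        "funding": bool(work.get("funder")),
        "license": bool(work.get("license")),
        "author_orcids": any("ORCID" in a for a in work.get("author", [])),
    }

if __name__ == "__main__":
    print(metadata_completeness("10.5555/12345678"))  # placeholder test DOI
```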
Our clients are increasingly aware of the key role metadata plays in the effective dissemination of research. With an increasing number of published articles and the clear dominance of search engines and content aggregation, metadata is the primary means of making sure that publications reach the right audience. Publishers’ value-add includes not just copy editing, formatting, and packaging, but also now creating journal articles for the digital age that are discoverable and well linked to the research corpus. Furthermore, we sense a clear move toward standardization, which goes beyond structure to introduce standardized semantics: adopting common taxonomies for classifying content in different dimensions. Our response is to introduce effective, automated, and consistent services that capture and surface metadata throughout the value chain from authoring to publication and search.
– Atypon
Highwire’s publishers are always looking to use the latest DTD (Document Type Definition) for their content so that it stays current with standards – at the moment, JATS 1.2. They choose to remain current so they can take advantage of any new metadata that can enrich their deposits. We have handled this well and offer support for the latest DTD versions as they are released, but publishers are not always familiar with what can or should be deposited with their content, so this can be a learning process for them.
– MPS Limited
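As a rough illustration of what enriching deposits from JATS content can involve: much of the metadata worth passing through to a Crossref deposit already sits in the JATS front matter. The sketch below pulls a few of those fields from a JATS 1.2 file; the element paths follow common JATS conventions, but real articles vary (namespaces, multiple abstracts, license URIs), and the file path and function name are placeholders.

```python
# Minimal sketch: read deposit-relevant metadata that JATS 1.2 front matter
# already carries (title, DOI, ORCIDs, abstract, license) so it can be passed
# through to a deposit rather than dropped. Element names follow common JATS
# usage; the file name is a placeholder and real articles may differ.
import xml.etree.ElementTree as ET

def jats_front_metadata(path: str) -> dict:
    meta = ET.parse(path).getroot().find("./front/article-meta")
    return {
        "title": meta.findtext("./title-group/article-title"),
        "doi": meta.findtext("./article-id[@pub-id-type='doi']"),
        "orcids": [c.text for c in meta.findall(
            "./contrib-group/contrib/contrib-id[@contrib-id-type='orcid']")],
        "abstract": meta.findtext("./abstract/p"),
        "license": meta.findtext("./permissions/license/license-p"),
    }

if __name__ == "__main__":
    print(jats_front_metadata("article.xml"))  # placeholder path
```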
In the digital age, metadata is the key to enabling effective content consumption. Publications that cannot be effectively discovered are of little value. We can only increase the impact of research with “discoverable” and “machine readable” publications. So ensuring correct and quality metadata is the key to optimizing not only the processing (finding the right journal, editor, reviewers) but also to positioning each publication properly. As the volume of published scientific research increases, article metadata is the way forward — it brings “order” and enables our community to manage this volume.
– Atypon
Highwire has always held that “good content in” means “good content out”. This is true for our own content stores: strong, valid metadata results in strong, valid deposits. We explain this to all new clients onboarded with Highwire, along with the importance of using current standards, and to existing client projects where content can or should be enriched through a reload.
– MPS Limited
Getting our journals to care about metadata is a two-step process: First, make sure they understand how metadata will help their journal succeed (i.e. why it matters to them). Second, make it easy for them to produce metadata while minimizing the cost, time, or complexity added to their workflow.
The first step – making a case for why metadata matters – is often easier than you’d think. At the very least, most journal editors understand that metadata, e.g., JATS or DOI registration, is an important signifier of professionalism and prestige. In other words, they see that top journals publish metadata and want the same for their journal.
From a more technical standpoint, metadata is important because that’s the format computers understand and, like it or not, the publishing ecosystem relies on computers to deliver all sorts of critical services – such as indexing, archiving, and discoverability. So, if you’re not publishing metadata, you’re likely missing the benefit of these services.
The second step – making it easy to produce metadata – is more difficult. Journal editors generally understand metadata matters but often lack the technical skills or resources necessary to create metadata.
This is where a platform, such as Scholastica, can be very helpful. Because platforms work with many journals, they can invest in tools to automate the creation of metadata, reducing costs for all their clients. For example, most platforms offer integrations to support automatic DOI registration. At Scholastica, we’re pushing this idea even further with automatic integration to more complicated services such as PubMed Central. By reducing cost and complexity, we can help new or small-budget journals have the same quality metadata normally reserved for large, established journals.
– Scholastica
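For readers curious what automatic DOI registration can look like under the hood, here is a minimal sketch of one common approach: posting a prepared deposit XML file to Crossref’s HTTPS upload endpoint. The endpoint and form fields shown reflect Crossref’s documented deposit interface, but the credentials, file name, and (absent) error handling are placeholders rather than how any particular platform, Scholastica included, actually implements it.

```python
# Minimal sketch of the kind of automation a platform can provide: submitting a
# prepared Crossref deposit XML file to the HTTPS deposit endpoint. The endpoint
# and form-field names follow Crossref's documented upload interface; the
# credentials and file name below are placeholders, not production values.
import requests

DEPOSIT_URL = "https://doi.crossref.org/servlet/deposit"

def register_dois(xml_path: str, username: str, password: str) -> str:
    """Upload a metadata deposit file; Crossref processes it asynchronously
    and reports the outcome via the submission log."""
    with open(xml_path, "rb") as fh:
        resp = requests.post(
            DEPOSIT_URL,
            data={"operation": "doMDUpload",
                  "login_id": username,
                  "login_passwd": password},
            files={"fname": fh},
            timeout=60,
        )
    resp.raise_for_status()
    return resp.text  # confirmation page; the real status arrives asynchronously

if __name__ == "__main__":
    print(register_dois("deposit.xml", "ROLE/USER", "PASSWORD"))  # placeholders
```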
We are sending other publishers’ metadata to academic libraries and distribution channels. Erroneous metadata will have a direct impact on how discoverable a title may be. The more uniform and correct the metadata, the better it will be indexed in other places.
– de Gruyter
What is the one industry development or trend you’re most excited about for the near future and why?
Open Science and the ability to deliver research with the tools for reproducing it is the most exciting and game-changing trend. Technology has enabled the output of science to transition from two-dimensional printed text into globally accessible and responsive web-based delivery. We are now taking the next steps to further leverage web technology to enhance research output with rich assets ranging from audio and video to datasets, executable code, high-resolution imagery, interactive applications, and more. As more assets accompany research publications, viewing these assets as modular, individually citable, and reusable becomes a requirement. We are reviewing the whole research output flow from authoring to publishing, and most importantly to its dissemination through the myriad of discovery tools now available.
– Atypon
The move of everything to the cloud – this is changing and improving our infrastructure, our ability to scale, and our capacity to stay on top of technological developments.
– de Gruyter
Thanks very much to the interviewees for their time and thoughts. We look forward to working with our entire Service Provider group on questions like these and many more. If you’d like more details, you can read about our Service Provider program or contact me for more information.